Dataset schema: id (string, 40 characters), pid (string, 42 characters), input (string, 8.37k to 169k characters), output (string, 1 to 1.63k characters).
af60462881b2d723adeb4acb5fbc07ea27b6bde2
af60462881b2d723adeb4acb5fbc07ea27b6bde2_0
Q: What patterns were discovered from the stories? Text: Introduction Sexual violence, including harassment, is a pervasive, worldwide problem with a long history. This global problem has finally become a mainstream issue thanks to the efforts of survivors and advocates. Statistics show that girls and women are put at high risk of experiencing harassment. Women have about a 3 in 5 chance of experiencing sexual harassment, whereas men have slightly less than a 1 in 5 chance BIBREF0, BIBREF1, BIBREF2. While women in developing countries face distinct challenges with sexual violence BIBREF3, sexual violence is ubiquitous. In the United States, for example, more than 300,000 people on average are sexually assaulted every year BIBREF4. Additionally, these numbers could be underestimates, because reasons such as guilt, blame, doubt and fear stop many survivors from reporting BIBREF5. Social media can be a more open and accessible channel for those who have experienced harassment to be empowered to freely share their traumatic experiences and to raise awareness of the vast scale of sexual harassment, which in turn allows us to understand and actively address abusive behavior as part of larger efforts to prevent future sexual harassment. The deadly gang rape of a medical student on a Delhi bus in 2012 was a catalyst for protest and action, including the development of Safecity, which uses online and mobile technology to work towards ending sexual harassment and assault. More recently, the #MeToo and #TimesUp movements further demonstrate how reporting personal stories on social media can raise awareness and empower women. Millions of people around the world have come forward and shared their stories. Instead of being bystanders, more and more people become upstanders who take action to protest against sexual harassment online. The stories of people who experienced harassment can be studied to identify different patterns of sexual harassment, which can enable solutions to be developed to make streets safer and to keep women and girls more secure when navigating city spaces BIBREF6. In this paper, we demonstrated the application of natural language processing (NLP) technologies to uncover harassment patterns from social media data. We made three key contributions: 1. Safecity is the largest publicly available online forum for reporting sexual harassment BIBREF6. We annotated about 10,000 personal stories from Safecity with the key elements, including information about the harasser (i.e. the words describing the harasser), time, location and the trigger words (i.e. the phrases that indicate the harassment that occurred). The key elements are important for studying the patterns of harassment and victimology BIBREF5, BIBREF7. Furthermore, we also associated each story with five labels that characterize the story in multiple dimensions (i.e. age of harasser, single/multiple harasser(s), type of harasser, type of location and time of day). The annotation data are available online. 2. We proposed joint learning NLP models that use a convolutional neural network (CNN) BIBREF8 and a bi-directional long short-term memory (BiLSTM) network BIBREF9, BIBREF10 as basic units. Our models can automatically extract the key elements from the sexual harassment stories and at the same time categorize the stories in different dimensions. The proposed models outperformed the single-task models, and achieved higher than previously reported accuracy in classification of harassment forms BIBREF6. 3.
We uncovered significant patterns from the categorized sexual harassment stories. Related Work Conventional surveys and reports are often used to study sexual harassment, but harassment is usually under-reported in them BIBREF2, BIBREF5. The high volume of social media data available online can provide us with a much larger collection of firsthand stories of sexual harassment. Social media data have already been used to analyze and predict distinct societal and health issues, in order to improve the understanding of wide-reaching societal concerns, including mental health, detecting domestic abuse, and cyberbullying BIBREF11, BIBREF12, BIBREF13, BIBREF14. There are only a very limited number of studies on sexual harassment stories shared online. Karlekar and Bansal karlekar2018safecity were, to our knowledge, the first group to apply NLP to analyze a large number ($\sim$10,000) of sexual harassment stories. Although their CNN-RNN classification models demonstrated high performance on classifying the forms of harassment, only the top 3 majority forms were studied. In order to study the details of sexual harassment, the trigger words are crucial. Additionally, research indicated that both situational factors and person (or individual difference) factors contribute to sexual harassment BIBREF15. Therefore, the information about perpetrators needs to be extracted, as well as the location and time of events. Karlekar and Bansal karlekar2018safecity applied several visualization techniques in order to capture such information, but it was not obtained explicitly. Our preliminary research demonstrated automatic extraction of key elements and story classification in separate steps BIBREF16. In this paper, we proposed joint learning NLP models to directly extract information about the harasser, time, location and trigger words as key elements and, at the same time, categorize the harassment stories in five dimensions. Our approach can provide an avenue to automatically uncover nuanced circumstances informing sexual harassment from online stories. Data Collection and Annotation We obtained 9,892 stories of sexual harassment incidents that were reported on Safecity. These stories include a text description, along with tags of the forms of harassment, e.g. commenting, ogling and groping. A dataset of these stories was published by Karlekar and Bansal karlekar2018safecity. In addition to the forms of harassment, we manually annotated each story with the key elements (i.e. “harasser”, “time”, “location”, “trigger”), because they are essential to uncovering the harassment patterns. An example is shown in Figure FIGREF3. Furthermore, we also assigned each story classification labels in five dimensions (Table TABREF4). The detailed definitions of the classifications in all dimensions are explained below. Age of Harasser: Individual differences such as age can affect harassment behaviors. Therefore, we studied the harassers in two age groups, young and adult. Young people in this paper refer to people in their early 20s or younger. Single/Multiple Harasser(s): Harassers may behave differently in groups than they do alone. Type of Harasser: Person factors in harassment include the common relationships or titles of the harassers. Additionally, the reactions of people who experience harassment may vary with the harassers' relations to themselves BIBREF5. We defined 10 groups with respect to the harassers' relationships or titles. We put conductors and drivers in one group, as they both work on public transportation.
Police and guards are put in the same category, because they are employed to provide security. Managers, supervisors, and colleagues are in the work-related group. The others are described by their names. Type of Location: It will be helpful to reveal the places where harassment most frequently occurs BIBREF7, BIBREF6. We defined 14 types of locations. “Station/stop” refers to places where people wait for public transportation or buy tickets. Private places include survivors' or harassers' homes, places where parties are held, etc. The others are described by their names. Time of Day: The time of an incident may be reported as “in evening” or at a specific time, e.g. “10 pm”. We considered 5 am to 6 pm as day time, and the rest of the day as night. Because many of the stories collected are short, many do not contain all of the key elements. For example, in “A man came near to her tried to be physical with her .”, the time and location are unknown from the story. In addition, the harassers were strangers to those they harassed in many cases. For instance, from “My friend was standing in the queue to pay bill and was ogled by a group of boys.”, we can only learn that there were multiple young harassers, but the type of harasser is unclear. The missing information is hence marked as “unspecified”. It is different from the label “other”, which means the information is provided but the number of such cases is too small to be represented by a separate group, for example, a “trader”. All the data were labeled by two trained annotators. Inter-rater agreement was measured by Cohen's kappa coefficient, ranging from 0.71 to 0.91 for classifications in the different dimensions and 0.75 for key element extraction (see Table 1 in the supplementary file for details). The disagreements were reviewed by a third annotator and a final decision was made. Proposed Models The key elements can be very informative when categorizing the incidents. For instance, in Figure 1, with the identified key elements, one can easily categorize the incident in the dimensions of “age of harasser” (adult), “single/multiple harasser(s)” (single), “type of harasser” (unspecified), “type of location” (park), “time of day” (day time). Therefore, we proposed two joint learning schemes to extract the key elements and categorize the incidents together. In the models' names, “J”, “A”, “SA” stand for joint learning, attention, and supervised attention, respectively. Proposed Models ::: CNN Based Joint Learning Models The first proposed structure consists of two layers of CNN modules (Figure FIGREF6). J-CNN: To predict the type of key element, it is essential for the CNN model to capture the context information around each word. Therefore, each word, along with its surrounding context of a fixed window size, was converted into a context sequence. Assuming a window size of $2l + 1$ around the target word $w_0$, the context sequence is $[(w_{-l}, w_{-l+1},...w_0, ...w_{l-1},w_l)]$, where $w_i (i \in [-l,l])$ stands for the $i$th word relative to $w_0$. Because the context sequences of two consecutive words in the original text are only off by one position, it would be difficult for the CNN model to detect the difference. Therefore, the position of each word in the context sequence is crucial information for the CNN model to make correct predictions BIBREF17. Each position was embedded as a $p$-dimensional vector, where $p$ is a hyperparameter. The position embeddings were learned at the training stage.
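To make the windowing concrete, here is a minimal Python sketch of how the context sequences and position indices described above could be built; the function and variable names are illustrative assumptions, not the authors' code.

```python
from typing import List, Tuple

PAD = "<pad>"

def build_context_sequences(words: List[str], l: int) -> List[List[Tuple[str, int]]]:
    """For each target word, return a window of 2l+1 (word, position index) pairs.

    The position index (shifted to be non-negative) is what would be mapped to a
    learned p-dimensional position embedding during training.
    """
    padded = [PAD] * l + words + [PAD] * l
    sequences = []
    for t in range(len(words)):
        window = []
        for offset in range(-l, l + 1):
            word = padded[t + l + offset]
            position_id = offset + l  # 0 .. 2l, index into the position embedding table
            window.append((word, position_id))
        sequences.append(window)
    return sequences

# Example: a window of size 2*2+1 = 5 around each word of a short story.
story = "a young man tried to touch her".split()
for target, seq in zip(story, build_context_sequences(story, l=2)):
    print(target, seq)
```

The shifted position index distinguishes the otherwise nearly identical windows of neighboring target words, which is the role the position embeddings play in the description above.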
Each word in the original text was then converted into a sequence of concatenated word and position embeddings. This sequence was fed into the CNN modules in the first layer of the model, which output the high-level word representations ($h_i, i\in [0,n-1]$, where $n$ is the number of input words). Each high-level word representation was then passed into a fully connected layer to predict the key element type of the word. The CNN modules in this layer share the same parameters. We input the sequence of high-level word representations ($h_i$) from the first layer into another layer of multiple CNN modules to categorize the harassment incident in each dimension (Figure FIGREF6). Inside each CNN module, the sequence of word representations was first passed through a convolution layer to generate a sequence of new feature vectors ($C =[c_0,c_1,...c_q]$). This vector sequence ($C$) was then fed into a max pooling layer, followed by a fully connected layer. Modules in this layer do not share parameters across classification tasks. J-ACNN: We also experimented with attentive pooling, replacing the max pooling layer with an attention layer. The attention layer aggregates the sequence of feature vectors ($C$) by measuring the contribution of each vector to the high-level representation of the harassment story (a short code sketch of this pooling appears at the end of this subsection). Specifically, a fully connected layer with a non-linear activation was applied to each vector $c_{i}$ to get its hidden representation $u_{i}$. The similarity of $u_{i}$ with a context vector $u_{w}$ was measured and normalized through a softmax function to give the importance weight $\alpha _{i}$. The final representation of the incident story, $v$, was an aggregation of all the feature vectors weighted by $\alpha _{i}$. $W_{\omega }$, $b_{\omega }$ and $u_{w}$ were learned during training. The final representation ($v$) was passed into one fully connected layer for each classification task. We also applied different attention layers for the different classifications: because the classification modules categorize the incident in different dimensions, their focuses vary. For example, to classify “time of day” one needs to focus on the time phrases, but to classify “age of harasser” one pays more attention to the harasser. J-SACNN: To further exploit the information of the key elements, we applied supervision BIBREF18 to the attentive pooling layer, with the annotated key element types of the words as ground truth. For instance, in the classification of “age of harasser”, the ground truth attention labels are 1 for words with the key element type “harasser” and 0 for the others. To conform to the CNN structure, we applied convolution to the sequence of ground truth attention labels, with the same window size ($w$) that was applied to the word sequence (Eq. DISPLAY_FORM11), where $\circ$ is element-wise multiplication, $e_t$ is the ground truth attention label, and $W \in R^{w\times 1}$ is a constant matrix with all elements equal to 1. $\alpha ^{*}$ was normalized through a softmax function and used as the ground truth weight values for the vector sequence ($C$) output from the convolution layer. The loss was calculated between the learned attention $\alpha$ and $\alpha ^{*}$ (Eq. DISPLAY_FORM12) and added to the total loss.
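The following is a minimal PyTorch-style sketch of the attentive pooling described above (hidden representation, similarity with a learned context vector, softmax weights, weighted sum). It is an illustration under the paper's description, not the authors' implementation; the module name and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    """Aggregate a sequence of feature vectors C = [c_0, ..., c_q] into one vector v."""

    def __init__(self, feature_dim: int, attention_dim: int):
        super().__init__()
        self.projection = nn.Linear(feature_dim, attention_dim)   # W_omega, b_omega
        self.context = nn.Parameter(torch.randn(attention_dim))   # u_w, learned context vector

    def forward(self, C: torch.Tensor):
        # C: (batch, seq_len, feature_dim)
        u = torch.tanh(self.projection(C))             # hidden representations u_i
        scores = torch.matmul(u, self.context)         # similarity with the context vector
        alpha = torch.softmax(scores, dim=1)           # importance weights alpha_i
        v = torch.sum(alpha.unsqueeze(-1) * C, dim=1)  # weighted aggregation
        return v, alpha                                # alpha is what supervised attention would constrain

# Example usage with random features: a batch of 2 stories, 30 positions, 200-dim features.
pool = AttentivePooling(feature_dim=200, attention_dim=50)
v, alpha = pool(torch.randn(2, 30, 200))
print(v.shape, alpha.shape)  # torch.Size([2, 200]) torch.Size([2, 30])
```

Returning the weights alpha alongside v is what makes the supervised variant possible: an extra loss between alpha and the smoothed ground-truth attention can simply be added to the total loss, as described for J-SACNN.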
Proposed Models ::: BiLSTM Based Joint Learning Models J-BiLSTM: The model inputs the sequence of word embeddings into a BiLSTM layer. To extract key elements, the hidden states from the forward and backward LSTM cells were concatenated and used as word representations to predict the key element types. To classify the harassment story in different dimensions, the concatenation of the forward and backward final states of the BiLSTM layer was used as the document-level representation of the story. J-ABiLSTM: We also experimented with a BiLSTM model with an attention layer to aggregate the outputs of the BiLSTM layer (Figure FIGREF7). The aggregation of the outputs was used as the document-level representation. J-SABiLSTM: Similarly, we experimented with supervised attention. In all the models, a softmax function was used to calculate the probabilities at the prediction step, and the cross entropy losses from the extraction and classification tasks were added together. In the case of supervised attention, the loss defined in Eq. DISPLAY_FORM12 was added to the total loss as well. We applied the stochastic gradient descent algorithm with mini-batches and the AdaDelta update rule (rho=0.95 and epsilon=1e-6) BIBREF19, BIBREF20. The gradients were computed using back-propagation. During training, we also optimized the word and position embeddings. Experiments and Results ::: Experimental Settings Data Splits: We used the same train, development, and test splits as Karlekar and Bansal BIBREF6, with 7201, 990 and 1701 stories, respectively. In this study, we only considered single-label classifications. Baseline Models: CNN and BiLSTM models that perform classification and extraction separately were used as baseline models. For classification, we also experimented with a BiLSTM with an attention layer. To demonstrate that the improvement came from the joint learning structure rather than the two-layer structure of J-CNN, we investigated the same model structure without training on key element extraction; we denote it J-CNN*. Preprocessing: All the texts were converted to lowercase and preprocessed by removing non-alphanumeric characters, excluding “. ! ?”. The word embeddings were pre-trained using fastText BIBREF21 with a dimension of 100. Hyperparameters: For the CNN model, the filter sizes were chosen to be (1,2,3,4), with 50 filters per filter size. The batch size was set to 50 and the dropout rate was 0.5. The BiLSTM model comprises two one-directional LSTM layers. Every LSTM cell has 50 hidden units. The dropout rate was 0.25. The attention size was 50. Experiments and Results ::: Results and Discussions We compared the joint learning models with the single-task models. Results are averages from five experiments. Although not much improvement was achieved in key element extraction (Table TABREF16), classification performance improved significantly with the joint learning schemes (Table TABREF17). Significance t-test results are shown in Table 2 in the supplementary file. BiLSTM Based Models: Joint learning BiLSTM with attention outperformed the single-task BiLSTM models. One reason is that joint learning directed the attention of the model to the correct part of the text. For example, consider the story “when i was returning my home after finishing my class . i was in queue to get on the micro bus and there was a girl opposite to me just then a young man tried to touch her on the breast .” The original paper renders the attention weights over this story as color-coded text; the three visualizations (S1 to S3) are summarized here. S1: the single-task BiLSTM with attention, classifying “age of harasser”, spread its attention over much of the sentence, with the largest weights on “touch”, “man”, “her”, “tried” and “on”.
S2: the joint learning BiLSTM with attention, classifying “age of harasser” on the same story, concentrated almost all of its attention on “young man”. S3: the joint learning model, classifying “type of location”, concentrated its attention on “micro bus”.
In S1, the regular BiLSTM with attention model for classification of “age of harasser” put some of its attention on phrases other than the harasser, and hence aggregated noise. This could explain why the regular BiLSTM model achieved lower performance than the CNN model. However, when trained jointly with key element extraction, it put almost all of its attention on the harasser “young man” (S2), which helped the model make the correct prediction of “young harasser”. When predicting the “type of location” (S3), the joint learning model directed its attention to “micro bus”. CNN Based Models: Since a CNN is efficient at capturing the most useful information BIBREF22, it is quite suitable for the classification tasks in this study. It achieved better performance than the BiLSTM model. The joint learning method boosted the performance even higher. This is because the classifications are related to the extracted key elements, and the word representations learned by the first layer of CNNs (Figure FIGREF6) are more informative than the word embeddings. By plotting t-SNE projections BIBREF23 of the two kinds of word vectors, we can see that the word representations in the joint learning model made the words more separable (Figure 1 in the supplementary file). In addition, no improvement was found with the J-CNN* model, which demonstrates that joint learning with extraction is essential for the improvement. With supervised attentive pooling, the model can get additional knowledge from the key element labels. This helped the model in cases where certain location phrases were mentioned but the incidents did not happen at those locations. For instance, for “I was followed on my way home .”, max pooling is very likely to predict “private places”, but the location is actually unknown. In other cases, with supervised attentive pooling, the model can distinguish “metro” and “metro station”, which are “transportation” and “stop/station” respectively. Therefore, the model further improved the classification of “type of location” with supervised attention in terms of macro F1. For some tasks, like “time of day”, there are fewer cases requiring such disambiguation and hence max pooling worked well. Supervised attention improved macro F1 in the location and harasser classifications, because it made more correct predictions in the cases that mentioned the location and harasser; but the majority of stories did not mention them. Therefore, the accuracy of J-SACNN did not increase compared with the other models.
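As an illustration of the t-SNE comparison mentioned above, the sketch below projects two sets of word vectors into 2-D with scikit-learn; the stand-in data and variable names are assumptions, not the authors' vectors.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Hypothetical inputs: pre-trained word embeddings vs. first-layer joint-model
# representations for the same vocabulary (rows aligned to the same words).
rng = np.random.default_rng(0)
word_embeddings = rng.normal(size=(500, 100))         # stand-in for fastText vectors
joint_representations = rng.normal(size=(500, 100))   # stand-in for first-layer CNN outputs

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, vectors, title in zip(
    axes,
    [word_embeddings, joint_representations],
    ["word embeddings", "joint-model word representations"],
):
    projected = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(vectors)
    ax.scatter(projected[:, 0], projected[:, 1], s=5)
    ax.set_title(title)
plt.tight_layout()
plt.show()
```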
Classification on Harassment Forms: In Table TABREF18, we also compared the performance of binary classifications of harassment forms with the results reported by Karlekar and Bansal karlekar2018safecity. The joint learning models achieved higher accuracy. In some harassment stories, the whole text or a span of the text consists of trigger words of multiple forms, such as “stare, whistles, start to sing, commenting”. The supervised attention mechanism forces the model to look at all such words rather than just the ones related to the harassment form being classified, and hence it can introduce noise. This can explain why J-SACNN got lower accuracy in two of the harassment form classifications, compared to J-ACNN. In addition, the J-CNN model did best in the “ogling” classification. Patterns of Sexual Harassment We plotted the distribution of harassment incidents in each categorization dimension (Figure FIGREF19). It displays statistics that provide important evidence as to the scale of harassment and that can serve as the basis for more effective interventions to be developed by authorities ranging from advocacy organizations to policy makers. It provides evidence to support some commonly assumed factors about harassment: First, we demonstrate that harassment occurred more frequently during the night time than the day time. Second, it shows that besides unspecified strangers (not shown in the figure), conductors and drivers top the list of identified types of harassers, followed by friends and relatives. Furthermore, we found strong correlations between the age of perpetrators and the location of harassment, between single/multiple harasser(s) and location, and between age and single/multiple harasser(s) (Figure FIGREF20). The significance of each correlation was tested with a chi-square test of independence, with a p-value less than 0.05. Identifying these patterns will enable interventions to be differentiated for and targeted at specific populations. For instance, young harassers often engage in harassment activities in groups. This points to the influence of peer pressure and masculine behavioral norms for men and boys on these activities. We also found that the majority of young perpetrators engaged in harassment behaviors on the streets. These findings suggest that interventions with young men and boys, who are readily influenced by peers, might be most effective when education is done peer-to-peer. They also point to the locations where such efforts could be made, including both in schools and on the streets. In contrast, we found that adult perpetrators of sexual harassment are more likely to act alone. Most of the adult harassers engaged in harassment on public transportation. These differences in adult harassment activities and locations mean that interventions should be responsive to these factors, for example, by increasing security measures on transit at key times and locations. In addition, we also found correlations between the forms of harassment and the age, single/multiple harasser(s), type of harasser, and location (Figure FIGREF21). For example, young harassers are more likely to engage in verbal harassment, rather than physical harassment, as compared to adults. Single perpetrators engaged in touching or groping more often than groups of perpetrators did. In contrast, commenting happened more frequently when harassers were in groups.
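To make the significance testing concrete, here is a minimal sketch of a chi-square test of independence on a contingency table of harasser age group versus location type, using SciPy; the counts are made up for illustration and are not the paper's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are harasser age groups,
# columns are location types (counts are illustrative only).
#                  street  transportation  private
table = np.array([
    [120,      45,             20],   # young
    [ 60,     110,             35],   # adult
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")
if p_value < 0.05:
    print("Reject independence: age group and location appear correlated.")
else:
    print("No significant evidence of correlation at the 0.05 level.")
```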
Last but not least, public transportation is where people got indecently touched most frequently, both by fellow passengers and by conductors and drivers. The nature and location of the harassment are particularly significant in developing strategies for those who are harassed, or who witness the harassment, to respond to and manage the everyday threat of harassment. For example, some strategies will work best on public transport, a particularly closed, shared-space setting, while other strategies might be more effective in the open space of the street. These results can provide valuable information for all members of the public. Sharing stories of harassment has been found by researchers to shift people's cognitive and emotional orientation towards their traumatic experiences BIBREF24. Greater awareness of the patterns and scale of harassment experiences promises to assure those who have been subjected to this violence that they are not alone, empowering others to report incidents, and assuring them that efforts are being made to prevent others from experiencing the same harassment. These results also provide various authorities with tools to identify potential harassment patterns and to make more effective interventions to prevent further harassment incidents. For instance, the authorities can increase targeted educational efforts at youth and adults, and be guided in utilizing limited resources most effectively to offer more safety measures, including policing and community-based responses, for example, focusing efforts on highly populated public transportation during the nighttime, when harassment is found to be most likely to occur. Conclusions We provided a large number of annotated personal stories of sexual harassment. Analyzing and identifying the social patterns of harassment behavior is essential to changing these patterns and the social tolerance for them. We demonstrated joint learning NLP models with strong performance that automatically extract key elements and categorize the stories. Potentially, the approaches and models proposed in this study can be applied to sexual harassment stories from other sources, to process and summarize the harassment stories and help those who have experienced harassment, as well as the authorities, to work faster, for example by automatically filing reports BIBREF6. Furthermore, we discovered meaningful patterns in the situations where harassment commonly occurred. The volume of social media data is huge, and the more we can extract from these data, the more powerful we can be as part of the efforts to build safer and more inclusive communities. Our work can increase the understanding of sexual harassment in society, ease the processing of such incidents by advocates and officials, and, most importantly, raise awareness of this urgent problem. Acknowledgments We thank Safecity for granting permission to use the data.
We demonstrate that harassment occurred more frequently during the night time than the day time; besides unspecified strangers (not shown in the figure), conductors and drivers top the list of identified types of harassers, followed by friends and relatives; we uncovered strong correlations between the age of perpetrators and the location of harassment, between single/multiple harasser(s) and location, and between age and single/multiple harasser(s); the majority of young perpetrators engaged in harassment behaviors on the streets; adult perpetrators of sexual harassment are more likely to act alone; we also found correlations between the forms of harassment and the age, single/multiple harasser(s), type of harasser, and location; commenting happened more frequently when harassers were in groups; and, last but not least, public transportation is where people got indecently touched most frequently, both by fellow passengers and by conductors and drivers.
879bec20c0fdfda952444018e9435f91e34d8788
879bec20c0fdfda952444018e9435f91e34d8788_0
Q: Did they use a crowdsourcing platform? Text: Introduction Sexual violence, including harassment, is a pervasive, worldwide problem with a long history. This global problem has finally become a mainstream issue thanks to the efforts of survivors and advocates. Statistics show that girls and women are put at high risk of experiencing harassment. Women have about a 3 in 5 chance of experiencing sexual harassment, whereas men have slightly less than 1 in 5 chance BIBREF0, BIBREF1, BIBREF2. While women in developing countries are facing distinct challenges with sexual violence BIBREF3, however sexual violence is ubiquitous. In the United States, for example, there are on average >300,000 people who are sexually assaulted every year BIBREF4. Additionally, these numbers could be underestimated, due to reasons like guilt, blame, doubt and fear, which stopped many survivors from reporting BIBREF5. Social media can be a more open and accessible channel for those who have experienced harassment to be empowered to freely share their traumatic experiences and to raise awareness of the vast scale of sexual harassment, which then allows us to understand and actively address abusive behavior as part of larger efforts to prevent future sexual harassment. The deadly gang rape of a medical student on a Delhi bus in 2012 was a catalyst for protest and action, including the development of Safecity, which uses online and mobile technology to work towards ending sexual harassment and assault. More recently, the #MeToo and #TimesUp movements, further demonstrate how reporting personal stories on social media can raise awareness and empower women. Millions of people around the world have come forward and shared their stories. Instead of being bystanders, more and more people become up-standers, who take action to protest against sexual harassment online. The stories of people who experienced harassment can be studied to identify different patterns of sexual harassment, which can enable solutions to be developed to make streets safer and to keep women and girls more secure when navigating city spaces BIBREF6. In this paper, we demonstrated the application of natural language processing (NLP) technologies to uncover harassment patterns from social media data. We made three key contributions: 1. Safecity is the largest publicly-available online forum for reporting sexual harassment BIBREF6. We annotated about 10,000 personal stories from Safecity with the key elements, including information of harasser (i.e. the words describing the harasser), time, location and the trigger words (i.e. the phrases indicate the harassment that occurred). The key elements are important for studying the patterns of harassment and victimology BIBREF5, BIBREF7. Furthermore, we also associated each story with five labels that characterize the story in multiple dimensions (i.e. age of harasser, single/multiple harasser(s), type of harasser, type of location and time of day). The annotation data are available online. 2. We proposed joint learning NLP models that use convolutional neural network (CNN) BIBREF8 and bi-directional long short-term memory (BiLSTM) BIBREF9, BIBREF10 as basic units. Our models can automatically extract the key elements from the sexual harassment stories and at the same time categorize the stories in different dimensions. The proposed models outperformed the single task models, and achieved higher than previously reported accuracy in classifications of harassment forms BIBREF6. 3. 
We uncovered significant patterns from the categorized sexual harassment stories. Related Work Conventional surveys and reports are often used to study sexual harassment, but harassment on these is usually under-reported BIBREF2, BIBREF5. The high volume of social media data available online can provide us a much larger collection of firsthand stories of sexual harassment. Social media data has already been used to analyze and predict distinct societal and health issues, in order to improve the understanding of wide-reaching societal concerns, including mental health, detecting domestic abuse, and cyberbullying BIBREF11, BIBREF12, BIBREF13, BIBREF14. There are a very limited number of studies on sexual harassment stories shared online. Karlekar and Bansal karlekar2018safecity were the first group to our knowledge that applied NLP to analyze large amount ( $\sim $10,000) of sexual harassment stories. Although their CNN-RNN classification models demonstrated high performance on classifying the forms of harassment, only the top 3 majority forms were studied. In order to study the details of the sexual harassment, the trigger words are crucial. Additionally, research indicated that both situational factors and person (or individual difference) factors contribute to sexual harassment BIBREF15. Therefore, the information about perpetrators needs to be extracted as well as the location and time of events. Karlekar and Bansal karlekar2018safecity applied several visualization techniques in order to capture such information, but it was not obtained explicitly. Our preliminary research demonstrated automatic extraction of key element and story classification in separate steps BIBREF16. In this paper, we proposed joint learning NLP models to directly extract the information of the harasser, time, location and trigger word as key elements and categorize the harassment stories in five dimensions as well. Our approach can provide an avenue to automatically uncover nuanced circumstances informing sexual harassment from online stories. Data Collection and Annotation We obtained 9,892 stories of sexual harassment incidents that was reported on Safecity. Those stories include a text description, along with tags of the forms of harassment, e.g. commenting, ogling and groping. A dataset of these stories was published by Karlekar and Bansal karlekar2018safecity. In addition to the forms of harassment, we manually annotated each story with the key elements (i.e. “harasser", “time", “location", “trigger"), because they are essential to uncover the harassment patterns. An example is shown in Figure FIGREF3. Furthermore, we also assigned each story classification labels in five dimensions (Table TABREF4). The detailed definitions of classifications in all dimensions are explained below. Age of Harasser: Individual difference such as age can affect harassment behaviors. Therefore, we studied the harassers in two age groups, young and adult. Young people in this paper refer to people in the early 20s or younger. Single/Multiple Harasser(s): Harassers may behave differently in groups than they do alone. Type of Harasser: Person factors in harassment include the common relationships or titles of the harassers. Additionally, the reactions of people who experience harassment may vary with the harassers' relations to themselves BIBREF5. We defined 10 groups with respects to the harassers' relationships or titles. We put conductors and drivers in one group, as they both work on the public transportation. 
Police and guards are put in the same category, because they are employed to provide security. Manager, supervisors, and colleagues are in the work-related group. The others are described by their names. Type of Location: It will be helpful to reveal the places where harassment most frequently occurs BIBREF7, BIBREF6. We defined 14 types of locations. “Station/stop” refers to places where people wait for public transportation or buy tickets. Private places include survivors' or harassers' home, places of parties and etc. The others are described by their names. Time of Day: The time of an incident may be reported as “in evening” or at a specific time, e.g. “10 pm”. We considered that 5 am to 6 pm as day time, and the rest of the day as the night. Because many of the stories collected are short, many do not contain all of the key elements. For example, “A man came near to her tried to be physical with her .”. The time and location are unknown from the story. In addition, the harassers were strangers to those they harassed in many cases. For instance, “My friend was standing in the queue to pay bill and was ogled by a group of boys.”, we can only learn that there were multiple young harassers, but the type of harasser is unclear. The missing information is hence marked as “unspecified”. It is different from the label “other", which means the information is provided but the number of them is too small to be represented by a group, for example, a “trader”. All the data were labeled by two annotators with training. Inter-rater agreement was measured by Cohen's kappa coefficient, ranging from 0.71 to 0.91 for classifications in different dimensions and 0.75 for key element extraction (details can refer to Table 1 in supplementary file). The disagreements were reviewed by a third annotator and a final decision was made. Proposed Models The key elements can be very informative when categorizing the incidents. For instance, in Figure 1, with identified key elements, one can easily categorize the incident in dimensions of “age of harasser” (adult), “single/multiple harasser(s)” (single), “type of harasser” (unspecified), “type of location” (park) , “time of day” (day time). Therefore, we proposed two joint learning schemes to extract the key elements and categorize the incidents together. In the models' names, “J”, “A”, “SA” stand for joint learning, attention, and supervised attention, respectively. Proposed Models ::: CNN Based Joint Learning Models In Figure FIGREF6, the first proposed structure consists of two layers of CNN modules. J-CNN: To predict the type of key element, it is essential for the CNN model to capture the context information around each word. Therefore, the word along with its surrounding context of a fixed window size was converted into a context sequence. Assuming a window size of $2l + 1$ around the target word $w_0$, the context sequence is $[(w_{-l}, w_{-l+1},...w_0, ...w_{l-1},w_l)]$, where $w_i (i \in [-l,l])$ stands for the $ith$ word from $w_0$. Because the context of the two consecutive words in the original text are only off by one position, it will be difficult for the CNN model to detect the difference. Therefore, the position of each word in this context sequence is crucial information for the CNN model to make the correct predictions BIBREF17. That position was embedded as a $p$ dimensional vector, where $p$ is a hyperparameter. The position embeddings were learned at the training stage. 
Each word in the original text was then converted into a sequence of the concatenation of word and position embeddings. Such sequence was fed into the CNN modules in the first layer of the model, which output the high level word representation ($h_i, i\in [0,n-1]$, where n is the number of input words). The high level word representation was then passed into a fully connected layer, to predict the key element type for the word. The CNN modules in this layer share the same parameters. We input the sequence of high level word representations ($h_i$) from the first layer into another layer of multiple CNN modules to categorize the harassment incident in each dimension (Figure FIGREF6). Inside each CNN module, the sequence of word representations were first passed through a convolution layer to generate a sequence of new feature vectors ($C =[c_0,c_1,...c_q]$). This vector sequence ($C$) was then fed into a max pooling layer. This is followed by a fully connected layer. Modules in this layer do not share parameters across classification tasks. J-ACNN: We also experimented with attentive pooling, by replacing the max pooling layer. The attention layer aggregates the sequence of feature vectors ($C$) by measuring the contribution of each vector to form the high level representation of the harassment story. Specifically, That is, a fully connected layer with non-linear activation was applied to each vector $c_{i}$ to get its hidden representation $u_{i}$. The similarity of $u_{i}$ with a context vector $u_{w}$ was measured and get normalized through a softmax function, as the importance weight $\alpha _{i}$. The final representation of the incident story $v$ was an aggregation of all the feature vectors weighted by $\alpha _{i}$. $W_{\omega }$, $b_{\omega }$ and $u_{w}$ were learned during training. The final representation ($v$) was passed into one fully connected layer for each classification task. We also applied different attention layers for different classifications, because the classification modules categorize the incident in different dimensions, their focuses vary. For example, to classify “time of day”, one needs to focus on the time phrases, but pays more attention to harassers when classifying “age of harasser”. J-SACNN: To further exploit the information of the key elements, we applied supervision BIBREF18 to the attentive pooling layer, with the annotated key element types of the words as ground truth. For instance, in classification of “age of harasser”, the ground truth attention labels for words with key element types of “harasser” are 1 and others are 0. To conform to the CNN structure, we applied convolution to the sequence of ground truth attention labels, with the same window size ($w$) that was applied to the word sequence (Eq. DISPLAY_FORM11). where $\circ $ is element-wise multiplication, $e_t$ is the ground truth attention label, and the $W \in R^{w\times 1}$ is a constant matrix with all elements equal to 1. $\alpha ^{*}$ was normalized through a softmax function and used as ground truth weight values of the vector sequence ($C$) output from the convolution layer. The loss was calculated between learned attention $\alpha $ and $\alpha ^{*}$ (Eq. DISPLAY_FORM12), and added to the total loss. Proposed Models ::: BiLSTM Based Joint Learning Models J-BiLSTM: The model input the sequence of word embeddings to the BiLSTM layer. 
To extract key elements, the hidden states from the forward and backward LSTM cells were concatenated and used as word representations to predict the key element types. To classify the harassment story in different dimensions, concatenation of the forward and backward final states of BiLSTM layer was used as document level representation of the story. J-ABiLSTM: We also experimented on BiLSTM model with the attention layer to aggregate the outputs from BiLSTM layer (Figure FIGREF7). The aggregation of the outputs was used as document level representation. J-SABiLSTM: Similarly, we experimented with the supervised attention. In all the models, softmax function was used to calculate the probabilities at the prediction step, and the cross entropy losses from extraction and classification tasks were added together. In case of supervised attention, the loss defined in Eq. DISPLAY_FORM12 was added to the total loss as well. We applied the stochastic gradient descent algorithm with mini-batches and the AdaDelta update Rule (rho=0.95 and epsilon=1e-6) BIBREF19, BIBREF20. The gradients were computed using back-propagation. During training, we also optimized the word and position embeddings. Experiments and Results ::: Experimental Settings Data Splits: We used the same splits of train, develop, and test sets used by Karlekar and Bansal BIBREF6, with 7201, 990 and 1701 stories, respectively. In this study, we only considered single label classifications. Baseline Models: CNN and BiLSTM models that perform classification and extraction separately were used as baseline models. In classification, we also experimented with BiLSTM with the attention layer. To demonstrate that the improvement came from joint learning structure rather the two layer structure in J-CNN, we investigated the same model structure without training on key element extraction. We use J-CNN* to denote it. Preprocess: All the texts were converted to lowercase and preprocessed by removing non-alphanumeric characters, excluding “. ! ? ” . The word embeddings were pre-trained using fastText BIBREF21 with dimension equaling 100. Hyperparameters: For the CNN model, the filter size was chosen to be (1,2,3,4), with 50 filters per filter size. Batch size was set to 50 and the dropout rate was 0.5. The BiLSTM model comprises two layers of one directional LSTM. Every LSTM cell has 50 hidden units. The dropout rate was 0.25. Attention size was 50. Experiments and Results ::: Results and Discussions We compared joint learning models with the single task models. Results are averages from five experiments. Although not much improvement was achieved in key element extraction (Figure TABREF16), classification performance improved significantly with joint learning schemes (Table TABREF17). Significance t-test results are shown in Table 2 in the supplementary file. BiLSTM Based Models: Joint learning BiLSTM with attention outperformed single task BiLSTM models. One reason is that it directed the attention of the model to the correct part of the text. For example, S1: “ foogreen!1.7003483371809125 foowhen foogreen!3.4324652515351772 fooi foogreen!10.76661329716444 foowas foogreen!20.388443022966385 fooreturning foogreen!9.704475291073322 foomy foogreen!6.052316632121801 foohome foogreen!2.477810252457857 fooafter foogreen!3.5612427163869143 foofinishing foogreen!4.7736018896102905 foomy foogreen!4.634172189980745 fooclass foogreen!0.6899426807649434 foo. 
foogreen!0.35572052001953125 fooi foogreen!0.3427551419008523 foowas foogreen!0.293194578262046 fooin foogreen!0.2028885210165754 fooqueue foogreen!0.10553237370913848 footo foogreen!0.19472737039905041 fooget foogreen!0.44946340494789183 fooon foogreen!0.5511227645911276 foothe foogreen!2.056689700111747 foomicro foogreen!2.597035141661763 foobus foogreen!2.5683704297989607 fooand foogreen!4.6382867731153965 foothere foogreen!9.827975183725357 foowas foogreen!21.346069872379303 fooa foogreen!22.295180708169937 foogirl foogreen!11.672522872686386 fooopposite foogreen!8.892465382814407 footo foogreen!18.20233091711998 foome foogreen!13.192926533520222 foojust foogreen!26.24184638261795 foothen foogreen!40.2555949985981 fooa foogreen!30.108729377388954 fooyoung foogreen!115.02625793218613 fooman foogreen!93.40204298496246 footried foogreen!58.68498980998993 footo foogreen!144.01434361934662 footouch foogreen!108.82275551557541 fooher foogreen!80.9452086687088 fooon foogreen!47.26015031337738 foothe foogreen!47.71501570940018 foobreast foogreen!19.392695277929306 foo.” S2: “ foogreen!0.2212507533840835 foowhen foogreen!0.26129744946956635 fooi foogreen!0.3014186804648489 foowas foogreen!0.314583390718326 fooreturning foogreen!0.23829322890378535 foomy foogreen!0.018542312318459153 foohome foogreen!0.06052045864635147 fooafter foogreen!0.3865368489641696 foofinishing foogreen!0.5127551266923547 foomy foogreen!0.569560332223773 fooclass foogreen!0.037081812479300424 foo. foogreen!0.061129467212595046 fooi foogreen!0.12043083552271128 foowas foogreen!0.2053432835964486 fooin foogreen!0.038308095099637285 fooqueue foogreen!0.05270353358355351 footo foogreen!0.07939991337480024 fooget foogreen!0.14962266141083091 fooon foogreen!0.11444976553320885 foothe foogreen!0.013002995729038958 foomicro foogreen!0.016201976904994808 foobus foogreen!0.14046543219592422 fooand foogreen!0.12413455988280475 foothere foogreen!0.18423641449771821 foowas foogreen!0.3394613158889115 fooa foogreen!1.0372470133006573 foogirl foogreen!0.20553644571918994 fooopposite foogreen!0.2821453963406384 footo foogreen!0.5574009846895933 foome foogreen!0.2709480468183756 foojust foogreen!0.2582515007816255 foothen foogreen!0.9223996312357485 fooa foogreen!788.9420390129089 fooyoung foogreen!199.1765946149826 fooman foogreen!0.39259070763364434 footried foogreen!0.27069455245509744 footo foogreen!0.5092779756523669 footouch foogreen!0.7033208385109901 fooher foogreen!0.6793316570110619 fooon foogreen!0.5892394692637026 foothe foogreen!0.4084075626451522 foobreast foogreen!0.14951340563129634 foo.” S3: “ foogreen!0.23944019631017 foowhen foogreen!0.16698541003279388 fooi foogreen!0.3381385176908225 foowas foogreen!0.21315943740773946 fooreturning foogreen!0.3222442464902997 foomy foogreen!0.8483575657010078 foohome foogreen!0.10339960863348097 fooafter foogreen!0.2440519310766831 foofinishing foogreen!0.39699181797914207 foomy foogreen!1.2218113988637924 fooclass foogreen!0.1232976937899366 foo. 
foogreen!0.10928708070423454 fooi foogreen!0.2562549489084631 foowas foogreen!0.8099888218566775 fooin foogreen!2.9650430660694838 fooqueue foogreen!0.507337914314121 footo foogreen!0.727736041881144 fooget foogreen!0.7367140497080982 fooon foogreen!0.711284636054188 foothe foogreen!194.2763775587082 foomicro foogreen!786.8869304656982 foobus foogreen!0.4422159108798951 fooand foogreen!0.43104542419314384 foothere foogreen!0.4694198723882437 foowas foogreen!0.5085613229312003 fooa foogreen!0.4430979897733778 foogirl foogreen!0.36199347232468426 fooopposite foogreen!0.31067250529304147 footo foogreen!0.2927705936599523 foome foogreen!0.24646619567647576 foojust foogreen!0.23911069729365408 foothen foogreen!0.11775700113503262 fooa foogreen!0.002219072712250636 fooyoung foogreen!0.0019248132048232947 fooman foogreen!0.32698659924790263 footried foogreen!0.3118939639534801 footo foogreen!0.5727249081246555 footouch foogreen!0.5670131067745388 fooher foogreen!0.7104063988663256 fooon foogreen!0.6698771030642092 foothe foogreen!0.4756081907544285 foobreast foogreen!0.26600153069011867 foo.” In S1, the regular BiLSTM with attention model for classification on “age of harasser” put some attention on phrases other than the harasser, and hence aggregated noise. This could explain why the regular BiLSTM model got lower performance than the CNN model. However, when training with key element extractions, it put almost all attention on the harasser “young man” (S2), which helped the model make correct prediction of “young harasser”. When predicting the “type of location” (S3), the joint learning model directed its attention to “micro bus”. CNN Based Models: Since CNN is efficient for capturing the most useful information BIBREF22, it is quite suitable for the classification tasks in this study. It achieved better performance than the BiLSTM model. The joint learning method boosted the performance even higher. This is because the classifications are related to the extracted key elements, and the word representation learned by the first layer of CNNs (Figure FIGREF6) is more informative than word embedding. By plotting of t-SNEs BIBREF23 of the two kinds of word vectors, we can see the word representations in the joint learning model made the words more separable (Figure 1 in supplementary file). In addition, no improvement was found with the J-CNN* model, which demonstrated the joint learning with extraction is essential for the improvement. With supervised attentive pooling, the model can get additional knowledge from key element labels. It helped the model in cases when certain location phrases were mentioned but the incidents did not happen at those locations. For instance, “I was followed on my way home .”, max pooling will very likely to predict it as “private places”. But, it is actually unknown. In other cases, with supervised attentive pooling, the model can distinguish “metro” and “metro station”, which are “transportation” and “stop/station” respectively. Therefore, the model further improved on classifications on “type of location” with supervised attention in terms of macro F1. For some tasks, like “time of day”, there are fewer cases with such disambiguation and hence max pooling worked well. Supervised attention improved macro F1 in location and harasser classifications, because it made more correct predictions in cases that mentioned location and harasser. But the majority did not mention them. Therefore, the accuracy of J-SACNN did not increase, compared with the other models. 
Classification on Harassment Forms: In Table TABREF18, we also compared the performance of binary classifications on harassment forms with the results reported by Karlekar and Bansal karlekar2018safecity. Joint learning models achieved higher accuracy. In some harassment stories, the whole text or a span of the text consists of trigger words of multiple forms, such as “stare, whistles, start to sing, commenting”. The supervised attention mechanism will force the model to look at all such words rather than just the one related to the harassment form for classification and hence it can introduce noise. This can explain why J-SACNN got lower accuracy in two of the harassment form classifications, compared to J-ACNN. In addition, J-CNN model did best in “ogling” classification. Patterns of Sexual Harassment We plotted the distribution of harassment incidents in each categorization dimension (Figure FIGREF19). It displays statistics that provide important evidence as to the scale of harassment and that can serve as the basis for more effective interventions to be developed by authorities ranging from advocacy organizations to policy makers. It provides evidence to support some commonly assumed factors about harassment: First, we demonstrate that harassment occurred more frequently during the night time than the day time. Second, it shows that besides unspecified strangers (not shown in the figure), conductors and drivers are top the list of identified types of harassers, followed by friends and relatives. Furthermore, we uncovered that there exist strong correlations between the age of perpetrators and the location of harassment, between the single/multiple harasser(s) and location, and between age and single/multiple harasser(s) (Figure FIGREF20). The significance of the correlation is tested by chi-square independence with p value less than 0.05. Identifying these patterns will enable interventions to be differentiated for and targeted at specific populations. For instance, the young harassers often engage in harassment activities as groups. This points to the influence of peer pressure and masculine behavioral norms for men and boys on these activities. We also found that the majority of young perpetrators engaged in harassment behaviors on the streets. These findings suggest that interventions with young men and boys, who are readily influenced by peers, might be most effective when education is done peer-to-peer. It also points to the locations where such efforts could be made, including both in schools and on the streets. In contrast, we found that adult perpetrators of sexual harassment are more likely to act alone. Most of the adult harassers engaged in harassment on public transportation. These differences in adult harassment activities and locations, mean that interventions should be responsive to these factors. For example, increasing the security measures on transit at key times and locations. In addition, we also found that the correlations between the forms of harassment with the age, single/multiple harasser, type of harasser, and location (Figure FIGREF21). For example, young harassers are more likely to engage in behaviors of verbal harassment, rather than physical harassment as compared to adults. It was a single perpetrator that engaged in touching or groping more often, rather than groups of perpetrators. In contrast, commenting happened more frequently when harassers were in groups. 
Last but not least, public transportation is where people got indecently touched most frequently, both by fellow passengers and by conductors and drivers. The nature and location of the harassment are particularly significant in developing strategies for those who are harassed, or who witness the harassment, to respond to and manage the everyday threat of harassment. For example, some strategies will work best on public transport, a particular kind of closed, shared-space setting, while other strategies might be more effective in the open space of the street. These results can provide valuable information for all members of the public. Sharing stories of harassment has been found by researchers to shift people’s cognitive and emotional orientation towards their traumatic experiences BIBREF24. Greater awareness of the patterns and scale of harassment experiences promises to assure those who have been subjected to this violence that they are not alone, empowering others to report incidents, and ensuring them that efforts are being made to prevent others from experiencing the same harassment. These results also provide various authorities with tools to identify potential harassment patterns and to make more effective interventions to prevent further harassment incidents. For instance, the authorities can increase targeted educational efforts aimed at youth and adults, and be guided in utilizing limited resources most effectively to offer more safety measures, including policing and community-based responses, for example by focusing efforts on highly populated public transportation during the nighttime, when harassment is found to be most likely to occur. Conclusions We provided a large number of annotated personal stories of sexual harassment. Analyzing and identifying the social patterns of harassment behavior is essential to changing these patterns and the social tolerance for them. We demonstrated joint learning NLP models with strong performance that automatically extract key elements and categorize the stories. Potentially, the approaches and models proposed in this study can be applied to sexual harassment stories from other sources, which can process and summarize the harassment stories and help those who have experienced harassment and authorities to work faster, such as by automatically filing reports BIBREF6. Furthermore, we discovered meaningful patterns in the situations where harassment commonly occurred. The volume of social media data is huge, and the more we can extract from these data, the more powerful we can be as part of the efforts to build safer and more inclusive communities. Our work can increase the understanding of sexual harassment in society, ease the processing of such incidents by advocates and officials, and, most importantly, raise awareness of this urgent problem. Acknowledgments We thank Safecity for granting permission to use the data.
Harassment occurred more frequently during the night time than the day time; besides unspecified strangers, conductors and drivers top the list of identified harassers, followed by friends and relatives; young harassers often act in groups and mostly on the streets, whereas adult harassers are more likely to act alone and on public transportation; strong correlations exist between the age of perpetrators, single/multiple harasser(s), type of harasser, location, and form of harassment
3c378074111a6cc7319c0db0aced5752c30bfffb
3c378074111a6cc7319c0db0aced5752c30bfffb_0
Q: Does the performance increase using their method? Text: Introduction Slot filling models are a useful method for simple natural language understanding tasks, where information can be extracted from a sentence and used to perform some structured action. For example, dates, departure cities and destinations represent slots to fill in a flight booking task. This information is extracted from natural language queries leveraging typical context associated with each slot type. Researchers have been exploring data-driven approaches to learning models for automatic identification of slot information since the 90's, and significant advances have been made BIBREF0 . Our paper builds on recent work on slot-filling using recurrent neural networks (RNNs) with a focus on the problem of training from minimal annotated data, taking an approach of sharing data from multiple tasks to reduce the amount of data for developing a new task. As candidate tasks, we consider the actions that a user might perform via apps on their phone. Typically, a separate slot-filling model would be trained for each app. For example, one model understands queries about classified ads for cars BIBREF1 and another model handles queries about the weather BIBREF2 . As the number of apps increases, this approach becomes impractical due to the burden of collecting and labeling the training data for each model. In addition, using independent models for each task has high storage costs for mobile devices. Alternatively, a single model can be learned to handle all of the apps. This type of approach is known as multi-task learning and can lead to improved performance on all of the tasks due to information sharing between the different apps BIBREF3 . Multi-task learning in combination with neural networks has been shown to be effective for natural language processing tasks BIBREF4 . When using RNNs for slot filling, almost all of the model parameters can be shared between tasks. In our study, only the relatively small output layer, which consists of slot embeddings, is individual to each app. More sharing means that less training data per app can be used and there will still be enough data to effectively train the network. The multi-task approach has lower data requirements, which leads to a large cost savings and makes this approach scalable to large numbers of applications. The shared representation that we build on leverages recent work on slot filling models that use neural network based approaches. Early neural network based papers propose feedforward BIBREF5 or RNN architectures BIBREF6 , BIBREF7 . The focus shifted to RNN's with long-short term memory cells (LSTMs) BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 after LSTMs were shown to be effective for other tasks BIBREF12 . The most recent papers use variations on LSTM sequence models, including encoder-decoder, external memory, or attention architectures BIBREF13 , BIBREF14 , BIBREF15 . The particular variant that we build on is a bidirectional LSTM, similar to BIBREF16 , BIBREF11 . One highly desirable property of a good slot filling model is to generalize to previously unseen slot values. For instance, we should not expect that the model will see the names of all the cities during training time, especially when only a small amount of training data is used. We address the generalizability issue by incorporating the open vocabulary embeddings from Ling et al. into our model BIBREF17 . These embeddings work by using a character RNN to process a word one letter at a time. 
This way the model can learn to share parameters between different words that use the same morphemes. For example, BBQ restaurants frequently use words like “smokehouse”, “steakhouse”, and “roadhouse” in their names, and “Bayside”, “Bayview”, and “Baywood” are all streets in San Francisco. Recognizing these patterns would be helpful in detecting a restaurant or street name slot, respectively. The two main contributions of this work are the multi-task model and the use of the open vocabulary character-based embeddings, which together allow for scalable slot filling models. Our work on multi-task learning in slot filling differs from its previous use in BIBREF18 in that we allow for soft sharing between tasks instead of explicitly matching slots to each other across different tasks. A limitation of explicit slot matching is that two slots that appear to have the same underlying type, such as location-based slots, may actually use the slot information in different ways depending on the overall intent of the task. In our model, the sharing between tasks is done implicitly by the neural network. Our approach to handling words unseen in training data is different from the delexicalization proposed in BIBREF19 in that we do not require the vocabulary items associated with slots and values to be prespecified. It is complementary to work on extending domain coverage BIBREF20 , BIBREF21 . The proposed model is described in more detail in Section "Model" . The approach is assessed on a new data collection based on four apps, described in Section "Data" . The experiments described in Section "Training and Model Configuration Details" investigate how much data is necessary for the $n$ -th app using a multi-task model that leverages the data from the previous $n-1$ apps, with results compared against the single-task model that only utilizes the data from the $n$ -th app. We conclude in Section "Conclusions" with a summary of the key findings and discussion of opportunities for future work. Model Our model has a word embedding layer, followed by a bi-directional LSTM (bi-LSTM), and a softmax output layer. The bi-LSTM allows the model to use information from both the right and left contexts of each word when making predictions. We choose this architecture because similar models have been used in prior work on slot filling and have achieved good results BIBREF16 , BIBREF11 . The LSTM gates are used as defined by Sak et al., including the use of the linear projection layer on the output of the LSTM BIBREF22 . The purpose of the projection layer is to produce a model with fewer parameters without reducing the number of LSTM memory cells. For the multi-task model, the word embeddings and the bi-LSTM parameters are shared across tasks, but each task has its own softmax layer. This means that if the multi-task model has half a million parameters, only a couple thousand of them are unique to each task and the other 99.5% are shared between all of the tasks. The slot labels are encoded in BIO format BIBREF23 , indicating whether a word is at the beginning of, inside, or outside any particular slot. Decoding is done greedily. If a label does not follow the BIO syntax rules, i.e., an inside tag must follow the appropriate begin tag, then it is replaced with the outside label. Evaluation is done using the CoNLL evaluation script BIBREF24 to calculate the F1 score. This is the standard way of evaluating slot-filling models in the literature. 
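To make the parameter sharing described above concrete, here is a minimal sketch of a tagger with a shared embedding layer and bi-LSTM encoder and a separate softmax output layer per task. It is a simplified PyTorch rendering rather than the authors' code: dropout and the linear projection on the LSTM output are omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    """Shared word embeddings + shared bi-LSTM; a separate linear/softmax
    output layer per task maps to that task's BIO slot labels."""

    def __init__(self, vocab_size, n_labels_per_task, emb_dim=200, hidden=250):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, bidirectional=True,
                               batch_first=True)
        # Only these small per-task heads are task-specific.
        self.heads = nn.ModuleList(
            nn.Linear(2 * hidden, n) for n in n_labels_per_task)

    def forward(self, token_ids, task_id):
        h, _ = self.encoder(self.embed(token_ids))
        return self.heads[task_id](h)   # (batch, seq_len, n_labels[task_id])

# Toy usage: 4 tasks with different BIO label inventories.
model = MultiTaskTagger(vocab_size=5000, n_labels_per_task=[9, 13, 11, 15])
logits = model(torch.randint(0, 5000, (2, 7)), task_id=1)
print(logits.shape)  # torch.Size([2, 7, 13])
```

The ratio of shared to task-specific parameters in this sketch mirrors the point made above: almost everything lives in the embedding and encoder, while each head contributes only a few thousand weights.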
In recent work on language modeling, a neural architecture that combined fixed word embeddings with character-based embeddings was found to be useful for handling previously unseen words BIBREF25 . Based on that result, the embeddings in the open vocabulary model are a concatenation of the character-based embeddings with fixed word embeddings. When an out-of-vocabulary word is encountered, its character-based embedding is concatenated with the embedding for the unknown word token. The character-based embeddings are generated from a two-layer bi-LSTM that processes each word one character at a time. The character-based word embedding is produced by concatenating the last states from each of the directional LSTMs in the second layer and passing them through a linear layer for dimensionality reduction. Data Crowd-sourced data was collected simulating common use cases for four different apps: United Airlines, Airbnb, Greyhound bus service and OpenTable. The corresponding actions are booking a flight, renting a home, buying bus tickets, and making a reservation at a restaurant. In order to elicit natural language, crowd workers were instructed to simulate a conversation with a friend planning an activity as opposed to giving a command to the computer. Workers were prompted with a slot type/value pair and asked to form a reply to their friend using that information. The instructions were to not include any other potential slots in the sentence, but this instruction was not always followed by the workers. Slot types were chosen to roughly correspond to form fields and UI elements, such as check boxes or dropdown menus, on the respective apps. The amount of data collected per app and the number of slot types is listed in Table 1 . The slot types for each app are described in Table 2 , and an example labeled sentence from each app is given in Table 3 . One thing to notice is that the number of slot types is relatively small when compared to the popular ATIS dataset that has over one hundred slot types BIBREF0 . In ATIS, separate slot types would be used for names of cities, states, or countries, whereas in this data all of those would fall under a single slot for locations. Slot values were pulled from manually created lists of locations, dates and times, restaurants, etc. Values for prompting each rater were sampled from these lists. Workers were instructed to use different re-phrasings of the prompted values, but most people used the prompted value verbatim. Occasionally, workers used an unprompted slot value not in the list. For the word-level LSTM, the data was lower-cased and tokenized using a standard tokenizer. Spelling mistakes were not corrected. All digits were replaced by the '#' character. Words that appear only once in the training data are replaced with an unknown word token. For the character-based word embeddings used in the open vocabulary model, no lower casing or digit replacement is done. Due to the way the OpenTable data was collected, some slot values were over-represented, leading to overfitting to those particular values. To correct this problem, sentences that used the over-represented slot values had their values replaced by sampling from a larger list of potential values. The affected slot types are the ones for cuisine, restaurant names, and locations. This substitution made the OpenTable data more realistic as well as more similar to the other data that was collected. 
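Returning to the character-based embeddings described at the start of this passage, the sketch below shows one way such an embedder could be written. It is an assumption-laden illustration, not the paper's implementation: the per-layer LSTM projection is dropped, only the 15-dimensional character embedding and 40-dimensional output match the reported configuration, and the class and variable names are ours.

```python
import torch
import torch.nn as nn

class CharWordEmbedder(nn.Module):
    """Reads a word one character at a time with a two-layer bi-LSTM and
    builds a word vector from the final forward/backward states of the
    second layer, followed by a linear layer for dimensionality reduction."""

    def __init__(self, n_chars, char_dim=15, hidden=40, out_dim=40):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, hidden, num_layers=2,
                                 bidirectional=True, batch_first=True)
        self.reduce = nn.Linear(2 * hidden, out_dim)

    def forward(self, char_ids):
        # char_ids: (n_words, max_word_len), padded character indices
        x = self.char_emb(char_ids)
        _, (h, _) = self.char_lstm(x)
        # h is (num_layers * 2, n_words, hidden); take the second layer's
        # forward and backward final states and concatenate them.
        word_vec = torch.cat([h[-2], h[-1]], dim=-1)
        return self.reduce(word_vec)

# One 40-d character-based vector per word, to be concatenated with the
# fixed word embedding (or the unknown-word embedding for OOV words).
embedder = CharWordEmbedder(n_chars=100)
print(embedder(torch.randint(0, 100, (3, 12))).shape)  # torch.Size([3, 40])
```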
The data we collected for the United Airlines app is an exception in a few ways: we collected four times as much data for this app as for the other ones; workers were occasionally prompted with up to four slot type/value pairs; and workers were instructed to give commands to their device instead of simulating a conversation with a friend. For all of the other apps, workers were prompted to use a single slot type per sentence. We argue that having varying amounts of data for different apps is a realistic scenario. Another possible source of data is the Air Travel Information Service (ATIS) data set collected in the early 1990's BIBREF0 . However, this data is sufficiently similar to the United collection that it is not likely to add sufficient variety to improve the target domains. Further, it suffers from artifacts of data collected at a time when speech recognition systems had much higher error rates. The new data collected for this work fills a need raised in BIBREF26 , which concluded that lack of data was an impediment to progress in slot filling. Experiments This section describes two sets of experiments: the first is designed to test the effectiveness of the multi-task model and the second is designed to test the generalizability of the open vocabulary model. The scenario is that we already have $n-1$ models in place and we wish to discover how much data will be necessary to build a model for an additional application. Training and Model Configuration Details The data is split to use 30% for training, with 70% to be used for test data. The reason that a majority of the data is used for testing is that in the second experiment the results are reported separately for sentences containing out of vocabulary tokens, and a large amount of data is needed to get a sufficient sample size. Hyperparameter tuning presents a challenge when operating in a low resource scenario. When there is barely enough data to train the model, none can be spared for a validation set. We used data from the United app for hyperparameter tuning since it is the largest, and assumed that the hyperparameter settings generalized to the other apps. Training is done using stochastic gradient descent with minibatches of 25 sentences. The initial learning rate is 0.3 and is set to decay to 98% of its value every 100 minibatches. For the multi-task model, training proceeds by alternating between each of the tasks when selecting the next minibatch. All the parameters are initialized uniformly in the range [-0.1, 0.1]. Dropout is used for regularization on the word embeddings and on the outputs from each LSTM layer, with the dropout probability set to 60% BIBREF27 . For the single-task model, the word embeddings are 60 dimensional and the LSTM is dimension 100 with a 70 dimensional projection layer on the LSTM. For the multi-task model, word embeddings are 200 dimensional, and the LSTM has 250 dimensions with a 170 dimensional projection layer. For the open vocabulary version of the model, the 200-dimensional input is a concatenation of 160-dimensional traditional word embeddings with 40-dimensional character-based word embeddings. The character embedding layer is 15 dimensions, the first LSTM layer is 40 dimensions with a 20 dimensional projection layer, and the second LSTM layer is 130 dimensions. Multi-task Model Experiments We compare a single-task model against the multi-task model for varying amounts of training data. 
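The training schedule described above (an initial learning rate of 0.3 decayed to 98% of its value every 100 minibatches, and alternation between tasks when drawing minibatches) can be summarized in a few lines. This is a schematic sketch with invented helper names; a real training loop would also compute losses and apply gradient updates.

```python
def learning_rate(step, initial_lr=0.3, decay=0.98, every=100):
    """Decay the learning rate to 98% of its value every 100 minibatches."""
    return initial_lr * decay ** (step // every)

def alternating_minibatches(task_batches):
    """Round-robin over per-task minibatch lists so the multi-task model
    sees one minibatch from each task in turn."""
    for group in zip(*task_batches):
        for task_id, batch in enumerate(group):
            yield task_id, batch

print([round(learning_rate(s), 4) for s in (0, 99, 100, 500)])
# [0.3, 0.3, 0.294, 0.2712]

batches = [["u1", "u2"], ["a1", "a2"], ["g1", "g2"]]   # toy per-task batches
print(list(alternating_minibatches(batches)))
# [(0, 'u1'), (1, 'a1'), (2, 'g1'), (0, 'u2'), (1, 'a2'), (2, 'g2')]
```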
In the multi-task model, the full amount of data is used for $n-1$ apps and the amount of data is allowed to vary only for the $n$ -th application. These experiments use the traditional word embeddings with a closed vocabulary. Since the data for the United app is bigger than the other three apps combined, it is used as an anchor for the multi-task model. The other three apps alternate in the position of the $n$ -th app. The data usage for the $n$ -th app is varied while the other $n-1$ apps in each experiment use the full amount of available training data. The full amount of training data is different for each app. The data used for the $n$ -th app is 200, 400, or 800 sentences or all available training data depending on the experiment. The test set remains fixed for all of the experiments even as part of the training data is discarded to simulate the low resource scenario. In Figure 1 we show the single-task vs. multi-task model performance for each of three different applications. The multi-task model outperforms the single-task model at all data sizes, and the relative performance increases as the size of the training data decreases. When only 200 sentences of training data are used, the performance of the multi-task model is about 60% better than the single-task model for both the Airbnb and Greyhound apps. The relative gain for the OpenTable app is 26%. Because the performance of the multi-task model decays much more slowly as the amount of training data is reduced, the multi-task model can deliver the same performance with a considerable reduction in the amount of labeled data. Open Vocabulary Model Experiments The open vocabulary model experiments test the ability of the model to handle unseen words in test time, which are particularly likely to occur when using a reduced amount of training data. In these experiments the open vocabulary model is compared against the fixed embedding model. The results are reported separately for the sentences that contain out of vocabulary tokens, since these are where the open vocabulary system is expected to have an advantage. Figure 2 gives the OOV rate for each app for varying amounts of training data plotted on a log-log scale. The OOV words tend to be task-specific terminology. For example, the OpenTable task is the only one that has names of restaurants but names of cities are present in all four tasks so they tend to be covered better. The OOV rate dramatically increases when the size of the training data is less than 500 sentences. Since our goal is to operate in the regime of less than 500 sentences per task, handling OOVs is a priority. The multi-task model is used in these experiments. The only difference between the closed vocabulary and open vocabulary systems is that the closed vocabulary system uses the traditional word embeddings and the open vocabulary system uses the traditional word embeddings concatenated with character-based embeddings. Table 4 reports F1 scores on the test set for both the closed and open vocabulary systems. The results differ between the tasks, but none have an overall benefit from the open vocabulary system. Looking at the subset of sentences that contain an OOV token, the open vocabulary system delivers increased performance on the Airbnb and Greyhound tasks. These two are the most difficult apps out of the four and therefore had the most room for improvement. The United app is also all lower case and casing is an important clue for detecting proper nouns that the open vocabulary model takes advantage of. 
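The out-of-vocabulary rates plotted in Figure 2 above are straightforward to compute once the closed vocabulary is fixed from the training split. The snippet below is a simplified sketch with hypothetical tokenized sentences; it ignores the singleton-to-unknown-token mapping used for the closed-vocabulary word embeddings.

```python
def oov_rate(train_sents, test_sents):
    """Fraction of test tokens whose word type never occurs in training."""
    vocab = {w for sent in train_sents for w in sent}
    test_tokens = [w for sent in test_sents for w in sent]
    return sum(w not in vocab for w in test_tokens) / len(test_tokens)

train = [["book", "a", "bus", "to", "portland"], ["a", "ticket", "to", "denver"]]
test = [["book", "a", "bus", "to", "spokane"]]
print(oov_rate(train, test))  # 0.2  (only "spokane" is unseen)
```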
Looking a little deeper, in Figure 3 we show the breakdown in performance across individual slot types. Only those slot types which occur at least one hundred times in the test data are shown in this figure. The slot types that are above the diagonal saw a performance improvement using the open vocabulary model. The opposite is true for those that are below the diagonal. The open vocabulary system appears to do worse on slots that express quantities, dates and times and better on slots with greater slot perplexity (i.e., greater variation in slot values) like ones relating to locations. The three slots where the open vocabulary model gave the biggest gain are the Greyhound LeavingFrom and GoingTo slots along with the Airbnb Amenities slot. The three slots where the open vocabulary model did the worst relative to the closed vocabulary model are the Airbnb Price slot, along with the Greyhound DiscountType and DepartDate slots. The Amenities slot is an example of a slot with higher perplexity (with options related to pets, availability of a gym, parking, fire extinguishers, proximity to attractions), and the DiscountType is one with lower perplexity (three options cover almost all cases). We hypothesize that the reason that the numerical slots are better under the closed vocabulary model is due to their relative simplicity and not an inability of the character embeddings to learn representations for numbers. Conclusions In summary, we find that using a multi-task model with shared embeddings gives a large reduction in the minimum amount of data needed to train a slot-filling model for a new app. This translates into a cost savings for deploying slot filling models for new applications. The combination of the multi-task model with the open vocabulary embeddings increases the generalizability of the model especially when there are OOVs in the sentence. These contributions allow for scalable slot filling models. For future work, there are some improvements that could be made to the model such as the addition of an attentional mechanism to help with long distance dependencies BIBREF15 , use of beam-search to improve decoding, and exploring unsupervised adaptation as in BIBREF19 . Another item for future work is to collect additional tasks to examine the scalability of the multi-task model beyond the four applications that were used in this work. Due to their extra depth, character-based methods usually require more data than word based models BIBREF28 . Since this paper uses limited data, the collection of additional tasks may significantly improve the performance of the open vocabulary model.
The multi-task model outperforms the single-task model at all data sizes, but none have an overall benefit from the open vocabulary system
b464bc48f176a5945e54051e3ffaea9a6ad886d7
b464bc48f176a5945e54051e3ffaea9a6ad886d7_0
Q: What tasks are they experimenting with in this paper?
Slot filling, we consider the actions that a user might perform via apps on their phone, The corresponding actions are booking a flight, renting a home, buying bus tickets, and making a reservation at a restaurant
3b40799f25dbd98bba5b526e0a1d0d0bb51173e0
3b40799f25dbd98bba5b526e0a1d0d0bb51173e0_0
Q: What is the size of the open vocabulary?
This way the model can learn to share parameters between different words that use the same morphemes. For example BBQ restaurants frequently use words like “smokehouse”, “steakhouse”, and “roadhouse” in their names and “Bayside”,“Bayview”, and “Baywood” are all streets in San Francisco. Recognizing these patterns would be helpful in detecting a restaurant or street name slot, respectively. The two main contributions of this work are the multi-task model and the use of the open vocabulary character-based embeddings, which together allow for scalable slot filling models. Our work on multi-task learning in slot filling differs from its previous use in BIBREF18 in that we allow for soft sharing between tasks instead of explicitly matching slots to each other across different tasks. A limitation of explicit slot matching is that two slots that appear to have the same underlying type, such as location-based slots, may actually use the slot information in different ways depending on the overall intent of the task. In our model, the sharing between tasks is done implicitly by the neural network. Our approach to handling words unseen in training data is different from the delexicalization proposed in BIBREF19 in that we do not require the vocabulary items associated with slots and values to be prespecified. It is complementary to work on extending domain coverage BIBREF20 , BIBREF21 . The proposed model is described in more detail in Section "Model" . The approach is assessed on a new data collection based on four apps, described in Section "Data" . The experiments described in Section "Training and Model Configuration Details" investigate how much data is necessary for the $n$ -th app using a multi-task model that leverages the data from the previous $n-1$ apps, with results compared against the single-task model that only utilizes the data from the $n$ -th app. We conclude in Section "Conclusions" with a summary of the key findings and discussion of opportunities for future work. Model Our model has a word embedding layer, followed by a bi-directional LSTM (bi-LSTM), and a softmax output layer. The bi-LSTM allows the model to use information from both the right and left contexts of each word when making predictions. We choose this architecture because similar models have been used in prior work on slot filling and have achieved good results BIBREF16 , BIBREF11 . The LSTM gates are used as defined by Sak et al. including the use of the linear projection layer on the output of the LSTM BIBREF22 . The purpose of the projection layer is to produce a model with fewer parameters without reducing the number of LSTM memory cells. For the multi-task model, the word embeddings and the bi-LSTM parameters are shared across tasks but each task has its own softmax layer. This means that if the multi-task model has half a million parameters, only a couple thousand of them are unique to each task and the other 99.5% are shared between all of the tasks. The slot labels are encoded in BIO format BIBREF23 indicating if a word is the beginning, inside or outside any particular slot. Decoding is done greedily. If a label does not follow the BIO syntax rules, i.e. an inside tag must follow the appropriate begin tag, then it is replaced with the outside label. Evaluation is done using the CoNLL evaluation script BIBREF24 to calculate the F1 score. This is the standard way of evaluating slot-filling models in the literature. 
In recent work on language modeling, a neural architecture that combined fixed word embeddings with character-based embeddings was found to to be useful for handling previously unseen words BIBREF25 . Based on that result, the embeddings in the open vocabulary model are a concatenation of the character-based embeddings with fixed word embeddings. When an out-of-vocabulary word is encountered, its character-based embedding is concatenated with the embedding for the unknown word token. The character-based embeddings are generated from a two layer bi-LSTM that processes each word one character at a time. The character-based word embedding is produced by concatenating the last states from each of the directional LSTM's in the second layer and passing them through a linear layer for dimensionality reduction. Data Crowd-sourced data was collected simulating common use cases for four different apps: United Airlines, Airbnb, Greyhound bus service and OpenTable. The corresponding actions are booking a flight, renting a home, buying bus tickets, and making a reservation at a restaurant. In order to elicit natural language, crowd workers were instructed to simulate a conversation with a friend planning an activity as opposed to giving a command to the computer. Workers were prompted with a slot type/value pair and asked to form a reply to their friend using that information. The instructions were to not include any other potential slots in the sentence but this instruction was not always followed by the workers. Slot types were chosen to roughly correspond to form fields and UI elements, such as check boxes or dropdown menus, on the respective apps. The amount of data collected per app and the number of slot types is listed in Table 1 . The slot types for each app are described in Table 2 , and an example labeled sentence from each app is given in Table 3 . One thing to notice is that the the number of slot types is relatively small when compared to the popular ATIS dataset that has over one hundred slot types BIBREF0 . In ATIS, separate slot types would be used for names of cities, states, or countries whereas in this data all of those would fall under a single slot for locations. Slot values were pulled from manually created lists of locations, dates and times, restaurants, etc. Values for prompting each rater were sampled from these lists. Workers were instructed to use different re-phrasings of the prompted values, but most people used the prompted value verbatim. Occasionally, workers used an unprompted slot value not in the list. For the word-level LSTM, the data was lower-cased and tokenized using a standard tokenizer. Spelling mistakes were not corrected. All digits were replaced by the '#' character. Words that appear only once in the training data are replaced with an unknown word token. For the character-based word embeddings used in the open vocabulary model, no lower casing or digit replacement is done. Due to the way the OpenTable data was collected some slot values were over-represented leading to over fitting to those particular values. To correct this problem sentences that used the over-represented slot values had their values replaced by sampling from a larger list of potential values. The affected slot types are the ones for cuisine, restaurant names, and locations. This substitution made the OpenTable data more realistic as well as more similar to the other data that was collected. 
The data we collected for the United Airlines app is an exception in a few ways: we collected four times as much data for this app than the other ones; workers were occasionally prompted with up to four slot type/value pairs; and workers were instructed to give commands to their device instead of simulating a conversation with a friend. For all of the other apps, workers were prompted to use a single slot type per sentence. We argue that having varying amounts of data for different apps is a realistic scenario. Another possible source of data is the Air Travel Information Service (ATIS) data set collected in the early 1990's BIBREF0 . However, this data is sufficiently similar to the United collection, that it is not likely to add sufficient variety to improve the target domains. Further, it suffers from artifacts of data collected at a time with speech recognition systems had much higher error rates. The new data collected for this work fills a need raised in BIBREF26 , which concluded that lack of data was an impediment to progress in slot filling. Experiments The section describes two sets of experiments: the first is designed to test the effectiveness of the multi-task model and the second is designed to test the generalizability of the open vocabulary model. The scenario is that we already have $n-1$ models in place and we wish to discover how much data will be necessary to build a model for an additional application. Training and Model Configuration Details The data is split to use 30% for training with 70% to be used for test data. The reason that a majority of the data is used for testing is that in the second experiment the results are reported separately for sentences containing out of vocabulary tokens and a large amount of data is needed to get a sufficient sample size. Hyperparameter tuning presents a challenge when operating in a low resource scenario. When there is barely enough data to train the model none can be spared for a validation set. We used data from the United app for hyperparameter tuning since it is the largest and assumed that the hyperparameter settings generalized to the other apps. Training is done using stochastic gradient descent with minibatches of 25 sentences. The initial learning rate is 0.3 and is set to decay to 98% of its value every 100 minibatches. For the multi-task model, training proceeds by alternating between each of the tasks when selecting the next minibatch. All the parameters are initialized uniformly in the range [-0.1, 0.1]. Dropout is used for regularization on the word embeddings and on the outputs from each LSTM layer with the dropout probability set to 60% BIBREF27 . For the single-task model, the word embeddings are 60 dimensional and the LSTM is dimension 100 with a 70 dimensional projection layer on the LSTM. For the multi-task model, word embeddings are 200 dimensional, and the LSTM has 250 dimensions with a 170 dimensional projection layer. For the open vocabulary version of the model, the 200-dimensional input is a concatenation of 160-dimensional traditional word embeddings with 40-dimensional character-based word embeddings. The character embedding layer is 15 dimensions, the first LSTM layer is 40 dimensions with a 20 dimensional projection layer, and the second LSTM layer is 130 dimensions. Multi-task Model Experiments We compare a single-task model against the multi-task model for varying amounts of training data. 
In the multi-task model, the full amount of data is used for $n-1$ apps and the amount of data is allowed to vary only for the $n$ -th application. These experiments use the traditional word embeddings with a closed vocabulary. Since the data for the United app is bigger than the other three apps combined, it is used as an anchor for the multi-task model. The other three apps alternate in the position of the $n$ -th app. The data usage for the $n$ -th app is varied while the other $n-1$ apps in each experiment use the full amount of available training data. The full amount of training data is different for each app. The data used for the $n$ -th app is 200, 400, or 800 sentences or all available training data depending on the experiment. The test set remains fixed for all of the experiments even as part of the training data is discarded to simulate the low resource scenario. In Figure 1 we show the single-task vs. multi-task model performance for each of three different applications. The multi-task model outperforms the single-task model at all data sizes, and the relative performance increases as the size of the training data decreases. When only 200 sentences of training data are used, the performance of the multi-task model is about 60% better than the single-task model for both the Airbnb and Greyhound apps. The relative gain for the OpenTable app is 26%. Because the performance of the multi-task model decays much more slowly as the amount of training data is reduced, the multi-task model can deliver the same performance with a considerable reduction in the amount of labeled data. Open Vocabulary Model Experiments The open vocabulary model experiments test the ability of the model to handle unseen words in test time, which are particularly likely to occur when using a reduced amount of training data. In these experiments the open vocabulary model is compared against the fixed embedding model. The results are reported separately for the sentences that contain out of vocabulary tokens, since these are where the open vocabulary system is expected to have an advantage. Figure 2 gives the OOV rate for each app for varying amounts of training data plotted on a log-log scale. The OOV words tend to be task-specific terminology. For example, the OpenTable task is the only one that has names of restaurants but names of cities are present in all four tasks so they tend to be covered better. The OOV rate dramatically increases when the size of the training data is less than 500 sentences. Since our goal is to operate in the regime of less than 500 sentences per task, handling OOVs is a priority. The multi-task model is used in these experiments. The only difference between the closed vocabulary and open vocabulary systems is that the closed vocabulary system uses the traditional word embeddings and the open vocabulary system uses the traditional word embeddings concatenated with character-based embeddings. Table 4 reports F1 scores on the test set for both the closed and open vocabulary systems. The results differ between the tasks, but none have an overall benefit from the open vocabulary system. Looking at the subset of sentences that contain an OOV token, the open vocabulary system delivers increased performance on the Airbnb and Greyhound tasks. These two are the most difficult apps out of the four and therefore had the most room for improvement. The United app is also all lower case and casing is an important clue for detecting proper nouns that the open vocabulary model takes advantage of. 
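For reference, the OOV rates plotted in Figure 2 come down to a small helper like the one below; sentences are assumed to be already tokenized and normalized, and min_count=2 encodes the rule that training singletons are mapped to the unknown token.

```python
from collections import Counter

def oov_rate(train_sents, test_sents, min_count=2):
    counts = Counter(tok for sent in train_sents for tok in sent)
    vocab = {tok for tok, c in counts.items() if c >= min_count}
    test_tokens = [tok for sent in test_sents for tok in sent]
    return sum(tok not in vocab for tok in test_tokens) / max(len(test_tokens), 1)

# One curve in the style of Figure 2 (sizes are illustrative):
# rates = [oov_rate(train[:n], test) for n in (200, 400, 800, len(train))]
```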
Looking a little deeper, in Figure 3 we show the breakdown in performance across individual slot types. Only those slot types which occur at least one hundred times in the test data are shown in this figure. The slot types that are above the diagonal saw a performance improvement using the open vocabulary model. The opposite is true for those that are below the diagonal. The open vocabulary system appears to do worse on slots that express quantities, dates and times and better on slots with greater slot perplexity (i.e., greater variation in slot values) like ones relating to locations. The three slots where the open vocabulary model gave the biggest gain are the Greyhound LeavingFrom and GoingTo slots along with the Airbnb Amenities slot. The three slots where the open vocabulary model did the worst relative to the closed vocabulary model are the Airbnb Price slot, along with the Greyhound DiscountType and DepartDate slots. The Amenities slot is an example of a slot with higher perplexity (with options related to pets, availability of a gym, parking, fire extinguishers, proximity to attractions), and the DiscountType is one with lower perplexity (three options cover almost all cases). We hypothesize that the reason that the numerical slots are better under the closed vocabulary model is due to their relative simplicity and not an inability of the character embeddings to learn representations for numbers. Conclusions In summary, we find that using a multi-task model with shared embeddings gives a large reduction in the minimum amount of data needed to train a slot-filling model for a new app. This translates into a cost savings for deploying slot filling models for new applications. The combination of the multi-task model with the open vocabulary embeddings increases the generalizability of the model especially when there are OOVs in the sentence. These contributions allow for scalable slot filling models. For future work, there are some improvements that could be made to the model such as the addition of an attentional mechanism to help with long distance dependencies BIBREF15 , use of beam-search to improve decoding, and exploring unsupervised adaptation as in BIBREF19 . Another item for future work is to collect additional tasks to examine the scalability of the multi-task model beyond the four applications that were used in this work. Due to their extra depth, character-based methods usually require more data than word based models BIBREF28 . Since this paper uses limited data, the collection of additional tasks may significantly improve the performance of the open vocabulary model.
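As a small addendum to the per-slot breakdown discussed above (Figure 3), slot-type-level F1 can be computed from span-level predictions along these lines; the exact matching criterion used by the authors is not stated, so exact match on (slot type, span) is assumed here.

```python
from collections import defaultdict

def per_slot_f1(gold, pred):
    # gold/pred: one set of (slot_type, start, end) spans per sentence.
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        for span in p:
            (tp if span in g else fp)[span[0]] += 1
        for span in g - p:
            fn[span[0]] += 1
    scores = {}
    for slot in set(tp) | set(fp) | set(fn):
        prec = tp[slot] / (tp[slot] + fp[slot]) if (tp[slot] + fp[slot]) else 0.0
        rec = tp[slot] / (tp[slot] + fn[slot]) if (tp[slot] + fn[slot]) else 0.0
        scores[slot] = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    return scores
```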
Unanswerable
3c16d4cf5dc23223980d9c0f924cb9e4e6943f13
3c16d4cf5dc23223980d9c0f924cb9e4e6943f13_0
Q: How do they select answer candidates for their QA task? Text: Introduction Pre-trained language representation models, including feature-based methods BIBREF0 , BIBREF1 and fine-tuning methods BIBREF2 , BIBREF3 , BIBREF4 , can capture rich language information from text and then benefit many NLP tasks. Bidirectional Encoder Representations from Transformers (BERT) BIBREF4 , as one of the most recently developed models, has produced the state-of-the-art results by simple fine-tuning on various NLP tasks, including named entity recognition (NER) BIBREF5 , text classification BIBREF6 , natural language inference (NLI) BIBREF7 , question answering (QA) BIBREF8 , BIBREF9 , and has achieved human-level performances on several datasets BIBREF8 , BIBREF9 . However, commonsense reasoning is still a challenging task for modern machine learning methods. For example, recently BIBREF10 proposed a commonsense-related task, CommonsenseQA, and showed that the BERT model accuracy remains dozens of points lower than human accuracy on the questions about commonsense knowledge. Some examples from CommonsenseQA are shown in Table 1 part A. As can be seen from the examples, although it is easy for humans to answer the questions based on their knowledge about the world, it is a great challenge for machines when there is limited training data. We hypothesize that exploiting knowledge graphs for commonsense in QA modeling can help model choose correct answers. For example, as shown in the part B of Table 1 , some triples from ConceptNet BIBREF11 are quite related to the questions above. Exploiting these triples in the QA modeling may benefit the QA models to make a correct decision. In this paper, we propose a pre-training approach that can leverage commmonsense knowledge graphs, such as ConceptNet BIBREF11 , to improve the commonsense reasoning capability of language representation models, such as BERT. And at the same time, the proposed approach targets maintaining comparable performances on other NLP tasks with the original BERT models. It is challenging to incorporate the commonsense knowledge into language representation models since the commonsense knowledge is represented as a structured format, such as (concept $_1$ , relation, concept $_2$ ) in ConceptNet, which is inconsistent with the data used for pre-training language representation models. For example, BERT is pre-trained on the BooksCorpus and English Wikipedia that are composed of unstructured natural language sentences. To tackle the challenge mentioned above, inspired by the distant supervision approach BIBREF12 , we propose the “align, mask and select" (AMS) method that can align the commonsense knowledge graphs with a large text corpus to construct a dataset consisting of sentences with labeled concepts. Different from the pre-training tasks for BERT, the masked language model (MLM) and next sentence prediction (NSP) tasks, we use the generated dataset in a multi-choice question answering task. We then pre-train the BERT model on this dataset with the multi-choice question answering task and fine-tune it on various commonsense-related tasks, such as CommonsenseQA BIBREF10 and Winograd Schema Challenge (WSC) BIBREF13 , and achieve significant improvements. We also fine-tune and evaluate the pre-trained models on other NLP tasks, such as sentence classification and NLI tasks, such as GLUE BIBREF6 , and achieve comparable performance with the original BERT models. In summary, the contributions of this paper are threefold. 
First, we propose a pre-training approach for incorporating commonsense knowledge into language representation models for improving the commonsense reasoning capabilities of these models. Second, We propose an “align, mask and select" (AMS) method, inspired by the distant supervision approaches, to automatically construct a multi-choice question answering dataset. Third, Experiments demonstrate that the pre-trained model from the proposed approach with fine-tuning achieves significant performance improvements on several commonsense-related tasks, such as CommonsenseQA BIBREF10 and Winograd Schema Challenge BIBREF13 , and still maintains comparable performances on several sentence classification and NLI tasks in GLUE BIBREF6 . Language Representation Model Language representation models have demonstrated their effectiveness for improving many NLP tasks. These approaches can be categorized into feature-based approaches and fine-tuning approaches. The early Word2Vec BIBREF14 and Glove models BIBREF0 focused on feature-based approaches to transform words into distributed representations. However, these methods suffered from the insufficiency for word disambiguation. BIBREF15 further proposed Embeddings from Language Models (ELMo) that derive context-aware word vectors from a bidirectional LSTM, which is trained with a coupled language model (LM) objective on a large text corpus. The fine-tuning approaches are different from the above-mentioned feature-based language approaches which only use the pre-trained language representations as input features. BIBREF2 pre-trained sentence encoders from unlabeled text and fine-tuned for a supervised downstream task. BIBREF3 proposed a generative pre-trained Transformer BIBREF16 (GPT) to learn language representations. BIBREF4 proposed a deep bidirectional model with multi-layer Transformers (BERT), which achieved the state-of-the-art performance for a wide variety of NLP tasks. The advantage of these approaches is that few parameters need to be learned from scratch. Though both feature-based and fine-tuning language representation models have achieved great success, they did not incorporate the commonsense knowledge. In this paper, we focus on incorporate commonsense knowledge into pre-training of language representation models. Commonsense Reasoning Commonsense reasoning is a challenging task for modern machine learning methods. As demonstrated in recent work BIBREF17 , incorporating commonsense knowledge into question answering models in a model-integration fashion helped improve commonsense reasoning ability. Instead of ensembling two independent models as in BIBREF17 , an alternative direction is to directly incorporate commonsense knowledge into an unified language representation model. BIBREF18 proposed to directly pre-training BERT on commonsense knowledge triples. For any triple (concept $_1$ , relation, concept $_2$ ), they took the concatenation of concept $_1$ and relation as the question and concept $_2$ as the correct answer. Distractors were formed by randomly picking words or phrases in the ConceptNet. In this work, we also investigate directly incorporating commonsense knowledge into an unified language representation model. However, we hypothesize that the language representations learned in BIBREF18 may be tampered since the inputs to the model constructed this way are not natural language sentences. 
To address this issue, we propose a pre-training approach for incorporating commonsense knowledge that includes a method to construct large-scale, natural language sentences. BIBREF19 collected the Common Sense Explanations (CoS-E) dataset using Amazon Mechanical Turk and applied a Commonsense Auto-Generated Explanations (CAGE) framework to language representation models, such as GPT and BERT. However, collecting this dataset used a large amount of human efforts. In contrast, in this paper, we propose an “align, mask and select" (AMS) method, inspired by the distant supervision approaches, to automatically construct a multi-choice question answering dataset. Distant Supervision The distant supervision approach was originally proposed for generating training data for the relation classification task. The distant supervision approach BIBREF12 assumes that if two entities/concepts participate in a relation, all sentences that mention these two entities/concepts express that relation. Note that it is inevitable that there exists noise in the data labeled by distant supervision BIBREF20 . In this paper, instead of employing the relation labels labeled by distant supervision, we focus on the aligned entities/concepts. We propose the AMS method to construct a multi-choice QA dataset that align sentences with commonsense knowledge triples, mask the aligned words (entities/concepts) in sentences and treat the masked sentences as questions, and select several entities/concepts from knowledge graphs as candidate choices. Commonsense Knowledge Base This section describes the commonsense knowledge base investigated in our experiments. We use the ConceptNet BIBREF11 , one of the most widely used commonsense knowledge bases. ConceptNet is a semantic network that represents the large sets of words and phrases and the commonsense relationships between them. It contains over 21 million edges and over 8 million nodes. Its English vocabulary contains approximately 1,500,000 nodes, and for 83 languages, it contains at least 10,000 nodes for each of them, respectively. ConceptNet contains a core of 36 relations. Each instance in ConceptNet can be generally represented as a triple $r_i$ = (concept $_1$ , relation, concept $_2$ ), indicating relation between the two concepts concept $_1$ and concept $_2$ . For example, the triple (semicarbazide, IsA, chemical compound) means that “semicarbazide is a kind of chemical compounds"; the triple (cooking dinner, Causes, cooked food) means that “the effect of cooking dinner is cooked food", etc. Constructing Pre-training Dataset In this section, we describe the details of constructing the commonsense-related multi-choice question answering dataset. Firstly, we filter the triples in ConceptNet with the following steps: (1) Filter triples in which one of the concepts is not English words. (2) Filter triples with the general relations “RelatedTo" and “IsA", which hold a large proportion in ConceptNet. (3) Filter triples in which one of the concepts has more than four words or the edit distance between the two concepts is less than four. After filtering, we obtain 606,564 triples. Each training sample is generated by three steps: align, mask and select, which we call as AMS method. Each sample in the dataset consists of a question and several candidate answers, which has the same form as the CommonsenseQA dataset. An example of constructing one training sample by masking concept $_2$ is shown in Table 2 . 
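Before the align, mask and select steps are detailed below, the triple filtering just described can be sketched as follows; is_english stands in for whatever language check was actually applied, so this illustrates the stated rules rather than the authors' script.

```python
def edit_distance(a, b):
    # Standard Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def keep_triple(concept1, relation, concept2, is_english):
    if not (is_english(concept1) and is_english(concept2)):
        return False
    if relation in {"RelatedTo", "IsA"}:                   # overly general relations
        return False
    if max(len(concept1.split()), len(concept2.split())) > 4:
        return False
    if edit_distance(concept1, concept2) < 4:              # near-duplicate concepts
        return False
    return True
```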
Firstly, we align each triple (concept $_1$ , relation, concept $_2$ ) from ConceptNet to the English Wikipedia dataset to extract the sentences with their concepts labeled. Secondly, we mask the concept $_1$ /concept $_2$ in one sentence with a special token [QW] and treat this sentence as a question, where QW is a replacement word of the question words “what", “where", etc. And the masked concept $_1$ /concept $_2$ is the correct answer for this question. Thirdly, for generating the distractors, BIBREF18 proposed a method to form distractors by randomly picking words or phrases in ConceptNet. In this paper, in order to generate more confusing distractors than the random selection approach, we request those distractors and the correct answer share the same concept $_2$ or concept $_1$ and the relation. That is to say, we search ( $\ast $ , relation, concept $_2$ ) and (concept $_2$0 , relation, $_2$1 ) in ConceptNet to select the distractors instead of random selection, where $_2$2 is a wildcard character that can match any word or phrase. For each question, we reserve four distractors and one correct answer. If there are less than four matched distractors, we discard this question instead of complementing it with random selection. If there are more than four distractors, we randomly select four distractors from them. After applying the AMS method, we create 16,324,846 multi-choice question answering samples. Pre-training BERT_CS We investigate a multi-choice question-answering task for pre-training the English BERT base and BERT large models released by Google on our constructed dataset. The resulting models are denoted BERT_CS $_{base}$ and BERT_CS $_{large}$ , respectively. We then investigate the performance of fine-tuning the BERT_CS models on several NLP tasks, including commonsense-related tasks and common NLP tasks, presented in Section "Experiments" . To reduce the large cost of training BERT_CS models from scratch, we initialize the BERT_CS models (for both BERT $_{base}$ and BERT $_{large}$ models) with the parameter weights released by Google. We concatenate the question with each answer to construct a standard input sequence for BERT_CS (i.e., “[CLS] the largest [QW] by ... ? [SEP] city [SEP]”, where [CLS] and [SEP] are two special tokens), and the hidden representations over the [CLS] token are run through a softmax layer to create the predictions. The objective function is defined as follows: $$L = - {\rm logp}(c_i|s),$$ (Eq. 10) $${\rm p}(c_i|s) = \frac{{\rm exp}(\mathbf {w}^{T}\mathbf {c}_{i})}{\sum _{k=1}^{N}{\rm exp}(\mathbf {w}^{T}\mathbf {c}_{k})},$$ (Eq. 11) where $c_i$ is the correct answer, $\mathbf {w}$ are the parameters in the softmax layer, N is the total number of all candidates, and $\mathbf {c}_i$ is the vector representation of the special token [CLS]. We pre-train BERT_CS models with the batch size 160, the initial learning rate $2e^{-5}$ and the max sequence length 128 for 1 epoch. The pre-training is conducted on 16 NVIDIA V100 GPU cards with 32G memory for about 3 days for the BERT_CS $_{large}$ model and 1 day for the BERT_CS $_{base}$ model. Experiments In this section, we investigate the performance of fine-tuning the BERT_CS models on several NLP tasks. 
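The multiple-choice objective in Equations 10 and 11 above is a softmax over per-candidate [CLS] scores; a minimal PyTorch sketch is given below, with the BERT encoder that produces the [CLS] vector for each "[CLS] question [SEP] candidate [SEP]" sequence left out.

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiChoiceHead(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.w = nn.Linear(hidden_size, 1, bias=False)    # the vector w in Eq. 11

    def forward(self, cls_vectors, correct_idx):
        # cls_vectors: (batch, N, hidden_size) [CLS] representations, one per candidate
        logits = self.w(cls_vectors).squeeze(-1)          # w^T c_k
        loss = F.cross_entropy(logits, correct_idx)       # -log p(c_i | s), Eq. 10
        return loss, logits.argmax(dim=-1)
```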
Note that when fine tuning on multi-choice QA tasks, e.g., CommonsenseQA and Winograd Schema Challenge (see section 5.3), we fine-tune all parameters in BERT_CS, including the last softmax layer from the token [CLS]; whereas, for other tasks, we randomly initialize the classifier layer and train it from scratch. Additionally, as described in BIBREF4 , fine-tuning on BERT sometimes is observed to be unstable on small datasets, so we run experiments with 5 different random seeds and select the best model based on the development set for all of the fine-tuning experiments in this section. CommonsenseQA In this subsection, we conduct experiments on a commonsense-related multi-choice question answering benchmark, the CommonsenseQA dataset BIBREF10 . The CommonsenseQA dataset consists of 12,247 questions with one correct answer and four distractor answers. This dataset consists of two splits – the question token split and the random split. Our experiments are conducted on the more challenging random split, which is the main evaluation split according to BIBREF10 . The statistics of the CommonsenseQA dataset are shown in Table 3 . Same as the pre-training stage, the input data for fine-tuning the BERT_CS models is formed by concatenating each question-answer pair as a sequence. The hidden representations over the [CLS] token are run through a softmax layer to create the predictions. The objective function is the same as Equations 10 and 11 . We fine-tune the BERT_CS models on CommonsenseQA for 2 epochs with a learning rate of 1e-5 and a batch size of 16. Table 4 shows the accuracies on the CommonsenseQA test set from the baseline BERT models released by Google, the previous state-of-the-art model CoS-E BIBREF19 , and our BERT_CS models. Note that CoS-E model requires a large amount of human effort to collect the Common Sense Explanations (CoS-E) dataset. In comparison, we construct our multi-choice question-answering dataset automatically. The BERT_CS models significantly outperform the baseline BERT model counterparts. BERT_CS $_{large}$ achieves a 5.5% absolute improvement on the CommonsenseQA test set over the baseline BERT $_{large}$ model and a 4% absolute improvement over the previous SOTA CoS-E model. Winograd Schema Challenge The Winograd Schema Challenge (WSC) BIBREF13 is introduced for testing AI agents for commonsense knowledge. The WSC consists of 273 instances of the pronoun disambiguation problem (PDP). For example, for sentence “The delivery truck zoomed by the school bus because it was going so fast.” and a corresponding question “What does the word it refers to?”, the machine is expected to answer “delivery truck” instead of “school bus”. In this task, we follow BIBREF22 and employ the WSCR dataset BIBREF23 as the extra training data. The WSCR dataset is split into a training set of 1322 examples and a test set of 564 examples. We use these data for fine-tuning and validating BERT_CS models, respectively, and test the fine-tuned BERT_CS models on the WSC dataset. We transform the pronoun disambiguation problem into a multi-choice question answering problem. We mask the pronoun word with a special token [QW] to construct a question, and put the two candidate paragraphs as candidate answers. The remaining procedures are the same as QA tasks. We use the same loss function as BIBREF22 , that is, if c $_1$ is correct and c $_2$ is not, the loss is $$\begin{aligned} L = &- {\rm logp}(c_1|s) + \\ &\alpha \cdot max(0, {\rm logp}(c_2|s)-{\rm logp}(c_1|s)+\beta ), \end{aligned}$$ (Eq. 
16) where $p(c_1|s)$ follows Equation 11 with $N=2$ , $\alpha $ and $\beta $ are two hyper-parameters. Similar to BIBREF22 , we search $\alpha \in \lbrace 2.5,5,10,20\rbrace $ and $\beta \in \lbrace 0.05,0.1,0.2,0.4\rbrace $ by comparing the accuracy on the WSCR test set (i.e., the development set for the WSC data set). We set the batch size 16 and the learning rate $1e^{-5}$ . We evaluate our models on the WSC dataset, as well as the various partitions of the WSC dataset, as described in BIBREF24 . We also evaluate the fine-tuned BERT_CS model (without using the WNLI training data for further fine-tuning) on the WNLI test set, one of the GLUE tasks. We first transform the examples in WNLI from the premise-hypothesis format into the pronoun disambiguation problem format and then transform it into the multi-choice QA format BIBREF22 . The results on the WSC dataset and its various partitions and the WNLI test set are shown in Table 5 . Note that the results for BIBREF21 are fine-tuned on the whole WSCR dataset, including the training and test sets. Results for LM ensemble BIBREF25 and Knowledge Hunter BIBREF26 are taken from BIBREF24 . Results for “BERT $_{large}$ + MTP" is taken from BIBREF22 as the baseline of applying BERT to the WSC task. As can be seen from Table 5 , the “BERT $_{large}$ + MCQA" achieves better performance than “BERT $_{large}$ + MTP" on four of the seven evaluation criteria and achieves significant improvement on the assoc. and consist. partitions, which demonstrates that MCQA is a better pre-processing method than MTP for the WSC task. Also, the “BERT_CS $_{large}$ + MCQA" achieves the best performance on all of the evaluation criteria but consist., and achieves a 3.3% absolute improvement on the WSC dataset over the previous SOTA results from BIBREF22 . GLUE The General Language Understanding Evaluation (GLUE) benchmark BIBREF6 is a collection of diverse natural language understanding tasks, including MNLI, QQP, QNLI, SST-2, CoLA, STS-B, MRPC, of which CoLA and SST-2 are single-sentence tasks, MRPC, STS-B and QQP are similarity and paraphrase tasks, and MNLI, QNLI, RTE and WNLI are natural language inference tasks. To investigate whether our multi-choice QA based pre-training approach degenerates the performance on common sentence classification tasks, we evaluate the BERT_CS $_{base}$ and BERT_CS $_{large}$ models on 8 GLUE datasets and compare the performances with those from the baseline BERT models. Following BIBREF4 , we use the batch size 32 and fine-tune for 3 epochs for all GLUE tasks, and select the fine-tuning learning rate (among 1e-5, 2e-5, and 3e-5) based on the performance on the development set. Results are presented in Table 6 . We observe that the BERT_CS $_{large}$ model achieves comparable performance with the BERT $_{large}$ model and the BERT_CS $_{base}$ model achieves slightly better performance than the BERT $_{base}$ model. We hypothesize that the commonsense knowledge may not be required for GLUE tasks. On the other hand, these results demonstrate that our proposed multi-choice QA pre-training task does not degrade the sentence representation capabilities of BERT models. Pre-training Strategy In this subsection, we conduct several comparison experiments using different data and different pre-training tasks on the BERT $_{base}$ model. For simplicity, we discard the subscript $base$ in this subsection. The first set of experiments is to compare the efficacy of our data creation approach versus the data creation approach in BIBREF18 . 
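For clarity, the margin-augmented WSC objective (Equation 16 above) can be written out directly; the two log-probabilities are assumed to come from the same two-candidate softmax as in Equation 11.

```python
import torch

def wsc_loss(logp_correct, logp_wrong, alpha, beta):
    # L = -log p(c1|s) + alpha * max(0, log p(c2|s) - log p(c1|s) + beta)
    margin = torch.clamp(logp_wrong - logp_correct + beta, min=0.0)
    return -logp_correct + alpha * margin

# alpha in {2.5, 5, 10, 20} and beta in {0.05, 0.1, 0.2, 0.4} are selected on the
# WSCR test set, as described above.
```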
First, same as BIBREF18 , we collect 606,564 triples from ConceptNet, and construct 1,213,128 questions, each with a correct answer and four distractors. This dataset is denoted the TRIPLES dataset. We pre-train BERT models on the TRIPLES dataset with the same hyper-parameters as the BERT_CS models and the resulting model is denoted BERT_triple. We also create several model counterparts based on our constructed dataset: Distractors are formed by randomly picking concept $_1$ /concept $_2$ in ConceptNet instead of those sharing the same concept $_2$ /concept $_1$ and the relation with the correct answers. We denote the resulting model from this dataset BERT_CS_random. Instead of pre-training BERT with a multi-choice QA task that chooses the correct answer from several candidate answers, we mask concept $_1$ and concept $_2$ and pre-train BERT with a masked language model (MLM) task. We denote the resulting model from this pre-training task BERT_MLM. We randomly mask 15% WordPiece tokens BIBREF27 of the question as in BIBREF4 and then conduct both multi-choice QA task and MLM task simultaneously. The resulting model is denoted BERT_CS_MLM. All these BERT models are fine-tuned on the CommonsenseQA dataset with the same hyper-parameters as described in Section "CommonsenseQA" and the results are shown in Table 7 . We observe the following from Table 7 . Comparing model 1 and model 2, we find that pre-training on ConceptNet benefits the CommonsenseQA task even with the triples as input instead of sentences. Further comparing model 2 and model 6, we find that constructing sentences as input for pre-training BERT performs better on the CommonsenseQA task than using triples for pre-training BERT. We also conduct more detailed comparisons between fine-tuning model 1 and model 2 on GLUE tasks. The results are shown in Table 6 . BERT_triple $_{base}$ yields much worse results than BERT $_{base}$ and BERT_CS $_{base}$ , which demonstrates that pre-training directly on triples may hurt the sentence representation capabilities of BERT. Comparing model 3 and model 6, we find that pre-training BERT benefits from a more difficult dataset. In our selection method, all candidate answers share the same (concept $_1$ , relation) or (relation, concept $_2$ ), that is, these candidates have close meanings. These more confusing candidates force BERT_CS to distinguish synonym meanings, resulting in a more powerful BERT_CS model. Comparing model 5 and model 6, we find that the multi-choice QA task works better than the masked LM task as the pre-training task for the target multi-choice QA task. We argue that, for the masked LM task, BERT_CS is required to predict each masked wordpieces (in concepts) independently and for the multi-choice QA task, BERT is required to model the whole candidate phrases. In this way, BERT is able to model the whole concepts instead of paying much attention to the single wordpieces in the sentences. Comparing model 4 and model 6, we observe that adding the masked LM task may hurt the performance of BERT_CS. This is probably because the masked words in questions may have a negative influence on the multi-choice QA task. Finally, our proposed model BERT_CS achieves the best performance on the CommonsenseQA development set among these model counterparts. Performance Curve In this subsection, we plot the performance curve on CommonsenseQA development set from BERT_CS over the pre-training steps. For every 10,000 training steps, we save the model as the initial model for fine-tuning. 
For every of these models, we run experiments for 10 times repeatedly with random restarts, that is, we use the same pre-trained checkpoint but perform different fine-tuning data shuffling. Due to the unstability of fine-tuning BERT BIBREF4 , we remove the results that are significantly lower than the mean. In our experiments, we remove the accuracy lower than 0.57 for BERT_CS $_{base}$ and 0.60 for BERT_CS $_{large}$ . We plot the mean and standard deviation values in Figure 1 . We observe that the performance of BERT_CS $_{base}$ converges around 50,000 training steps and BERT_CS $_{large}$ converges around the end of the pre-training stage or may not have converged, which demonstrates that the BERT_CS $_{large}$ is more powerful at incorporating commonsense knowledge. We also compare with pre-training BERT_CS models for 2 epochs. However, our model produces worse performance probably due to over-fitting. Pre-training on a larger corpus (with more QA samples) may benefit the BERT_CS models and we leave this to the future work. Error Analysis Table 8 shows several cases from the Winograd Schema Challenge dataset. Questions 1 and 2 only differ in the words “compassionate" and “cruel". Our model BERT_CS $_{large}$ chooses correct answers for both questions while BERT $_{large}$ chooses the same choice “Bill" for both questions. We speculate that BERT $_{large}$ tends to choosing the closer candidates. We split WSC test set into two parts CLOSE and FAR according as the correct candidate is closer or farther to the pronoun word in the sentence than another candidate. As shown in Table 9 , our model BERT_CS $_{large}$ achieves the same performance on CLOSE set and better performance on FAR set than BERT $_{large}$ . That's to say, BERT_CS $_{large}$ is more robust to the position of the words and focuses more on the semantic of the sentence. Questions 3 and 4 only differ in the words “large" and “small". However, neither BERT_CS $_{large}$ nor BERT $_{large}$ chooses the correct answers. We hypothesize that since “suitcase is large" and “trophy is small" are probably quite frequent for language models, both BERT $_{large}$ and BERT_CS $_{large}$ models make mistakes. In future work, we will investigate other approaches for overcoming the sensitivity of language models and improving commonsense reasoning. Conclusion In this paper, we develop a pre-training approach for incorporating commonsense knowledge into language representation models such as BERT. We construct a commonsense-related multi-choice question answering dataset for pre-training BERT. The dataset is created automatically by our proposed “align, mask, and select" (AMS) method. Experimental results demonstrate that pre-training models using the proposed approach followed by fine-tuning achieves significant improvements on various commonsense-related tasks, such as CommonsenseQA and Winograd Schema Challenge, while maintaining comparable performance on other NLP tasks, such as sentence classification and natural language inference (NLI) tasks, compared to the original BERT models. In future work, we will incorporate the relationship information between two concepts into language representation models. We will also explore other structured knowledge graphs, such as Freebase, to incorporate entity information into language representation models. We also plan to incorporate commonsense knowledge information into other language representation models such as XLNet BIBREF28 . 
Acknowledgments The authors would like to thank Lingling Jin, Pengfei Fan, Xiaowei Lu for supporting 16 NVIDIA V100 GPU cards.
AMS method.
4c822bbb06141433d04bbc472f08c48bc8378865
4c822bbb06141433d04bbc472f08c48bc8378865_0
Q: How do they extract causality from text? Text: Introduction Social media and online social networks now provide vast amounts of data on human online discourse and other activities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . With so much communication taking place online and with social media being capable of hosting powerful misinformation campaigns BIBREF7 such as those claiming vaccines cause autism BIBREF8 , BIBREF9 , it is more important than ever to better understand the discourse of causality and the interplay between online communication and the statement of cause and effect. Causal inference is a crucial way that humans comprehend the world, and it has been a major focus of philosophy, statistics, mathematics, psychology, and the cognitive sciences. Philosophers such as Hume and Kant have long argued whether causality is a human-centric illusion or the discovery of a priori truth BIBREF10 , BIBREF11 . Causal inference in science is incredibly important, and researchers have developed statistical measures such as Granger causality BIBREF12 , mathematical and probabilistic frameworks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , and text mining procedures BIBREF17 , BIBREF18 , BIBREF19 to better infer causal influence from data. In the cognitive sciences, the famous perception experiments of Michotte et al. led to a long line of research exploring the cognitive biases that humans possess when attempting to link cause and effect BIBREF20 , BIBREF21 , BIBREF22 . How humans understand and communicate cause and effect relationships is complicated, and is influenced by language structure BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 and sentiment or valence BIBREF27 . A key finding is that the perceived emphasis or causal weight changes between the agent (the grammatical construct responsible for a cause) and the patient (the construct effected by the cause) depending on the types of verbs used to describe the cause and effect. Researchers have hypothesized BIBREF28 that this is because of the innate weighting property of the verbs in the English language that humans use to attribute causes and effects. Another finding is the role of a valence bias: the volume and intensity of causal reasoning may increase due to negative feedback or negative events BIBREF27 . Despite these long lines of research, causal attributions made via social media or online social networks have not been well studied. The goal of this paper is to explore the language and topics of causal statements in a large corpus of social media taken from Twitter. We hypothesize that language and sentiment biases play a significant role in these statements, and that tools from natural language processing and computational linguistics can be used to study them. We do not attempt to study the factual correctness of these statements or offer any degree of verification, nor do we exhaustively identify and extract all causal statements from these data. Instead, here we focus on statements that are with high certainty causal statements, with the goal to better understand key characteristics about causal statements that differ from everyday online communication. The rest of this paper is organized as follows: In Sec. "Materials and Methods" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. 
"Results" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. "Discussion" . Dataset, filtering, and corpus selection Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work. All document text was processed the same way. Punctuation, XML characters, and hyperlinks were removed, as were Twitter-specific “at-mentions” and “hashtags” (see also the Appendix). There is useful information here, but it is either not natural language text, or it is Twitter-specific, or both. Documents were broken into individual words (unigrams) on whitespace. Casing information was retained, as we will use it for our Named Entity analysis, but otherwise all words were considered lowercase only (see also the Appendix). Stemming BIBREF30 and lemmatization BIBREF31 were not performed. Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively. Tagging and corpus comparison Documents were further studied by annotating their unigrams with Parts-of-Speech (POS) and Named Entities (NE) tags. POS tagging was done using NLTK v3.1 BIBREF29 which implements an averaged perceptron classifier BIBREF32 trained on the Brown Corpus BIBREF33 . (POS tagging is affected by punctuation; we show in the Appendix that our results are relatively robust to the removal of punctuation.) POS tags denote the nouns, verbs, and other grammatical constructs present in a document. Named Entity Recognition (NER) was performed using the 4-class, distributional similarity tagger provided as part of the Stanford CoreNLP v3.6.0 toolkit BIBREF34 . 
NER aims to identify and classify proper words in a text. The NE classifications considered were: Organization, Location, Person, and Misc. The Stanford NER tagger uses a conditional random field model BIBREF35 trained on diverse sets of manually-tagged English-language data (CoNLL-2003) BIBREF34 . Conditional random fields allow dependencies between words so that `New York' and `New York Times', for example, are classified separately as a location and organization, respectively. These taggers are commonly used and often provide reasonably accurate results, but there is always potential ambiguity in written text and improving upon these methods remains an active area of research. Unigrams, POS, and NEs were compared between the cause and control corpora using odds ratios (ORs): $$\operatorname{OR}(x) = \frac{p_C(x)/ (1-p_C(x))}{p_N(x) / (1-p_N(x))},$$ (Eq. 1) where $p_C(x)$ and $p_N(x)$ are the probabilities that a unigram, POS, or NE $x$ occurs in the causal and control corpus, respectively. These probabilities were computed for each corpus separately as $p(x) = f(x) / \sum _{x^{\prime } \in V} f(x^{\prime })$ , where $f(x)$ is the total number of occurrences of $x$ in the corpus and $V$ is the relevant set of unigrams, POS, or NEs. Confidence intervals for the ORs were computed using Wald's methodology BIBREF36 . As there are many unique unigrams in the text, when computing unigram ORs we focused on the most meaningful unigrams within each corpus by using the following filtering criteria: we considered only the $\operatorname{OR}$ s of the 1500 most frequent unigrams in that corpus that also have a term-frequency-inverse-document-frequency (tf-idf) score above the 90th percentile for that corpus BIBREF37 . The tf-idf was computed as $$\mbox{tf-idf}(w) = \log f(w) \times \log \left(D̑{\mathit {df}(w)} \right) ,$$ (Eq. 2) where $D$ is the total number of documents in the corpus, and $\mathit {df}(w)$ is the number of documents in the corpus containing unigram $w$ . Intuitively, unigrams with higher tf-idf scores appear frequently, but are not so frequent that they are ubiquitous through all documents. Filtering via tf-idf is standard practice in the information retrieval and data mining fields. Cause-trees For a better understanding of the higher-order language structure present in text phrases, cause-trees were constructed. A cause-tree starts with a root cause word (either `caused', `causing' or `causes'), then the two most probable words following (preceding) the root are identified. Next, the root word plus one of the top probable words is combined into a bigram and the top two most probable words following (preceding) this bigram are found. Repeatedly applying this process builds a binary tree representing the $n$ -grams that begin with (terminate at) the root word. This process can continue until a certain $n$ -gram length is reached or until there are no more documents long enough to search. Sentiment analysis Sentimental analysis was applied to estimate the emotional content of documents. Two levels of analysis were used: a method where individual unigrams were given crowdsourced numeric sentiment scores, and a second method involving a trained classifier that can incorporate document-level phrase information. For the first sentiment analysis, each unigram $w$ was assigned a crowdsourced “labMT” sentiment score $s(w)$ BIBREF5 . (Unlike BIBREF5 , scores were recentered by subtracting the mean, $s(w) \leftarrow s(w)-\left<s\right>$ .) 
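The odds-ratio and tf-idf computations above reduce to a few lines of Python; the tf-idf here follows the standard form log f(w) * log(D / df(w)), which matches the description around Eq. 2, and the Wald confidence intervals are omitted.

```python
import math
from collections import Counter

def odds_ratios(causal_counts, control_counts):
    # Eq. 1: OR(x) = [p_C(x)/(1-p_C(x))] / [p_N(x)/(1-p_N(x))], with p(x) = f(x)/sum f
    n_c, n_n = sum(causal_counts.values()), sum(control_counts.values())
    ors = {}
    for x in set(causal_counts) & set(control_counts):
        p_c, p_n = causal_counts[x] / n_c, control_counts[x] / n_n
        ors[x] = (p_c / (1 - p_c)) / (p_n / (1 - p_n))
    return ors

def tf_idf(docs):
    # Eq. 2: tf-idf(w) = log f(w) * log(D / df(w))
    f, df = Counter(), Counter()
    for doc in docs:
        f.update(doc)
        df.update(set(doc))
    D = len(docs)
    return {w: math.log(f[w]) * math.log(D / df[w]) for w in f}
```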
Unigrams determined by volunteer raters to have a negative emotional sentiment (`hate',`death', etc.) have $s(w) < 0$ , while unigrams determined to have a positive emotional sentiment (`love', `happy', etc.) tend to have $s(w) > 0$ . Unigrams that have labMT scores and are above the 90th percentile of tf-idf for the corpus form the set $\tilde{V}$ . (Unigrams in $\tilde{V}$ need not be among the 1500 most frequent unigrams.) The set $\tilde{V}$ captures 87.9% (91.5%) of total unigrams in the causal (control) corpus. Crucially, the tf-idf filtering ensures that the words `caused', `causes', and `causing', which have a slight negative sentiment, are not included and do not introduce a systematic bias when comparing the two corpora. This sentiment measure works on a per-unigram basis, and is therefore best suited for large bodies of text, not short documents BIBREF5 . Instead of considering individual documents, the distributions of labMT scores over all unigrams for each corpus was used to compare the corpora. In addition, a single sentiment score for each corpus was computed as the average sentiment score over all unigrams in that corpus, weighed by unigram frequency: $\sum _{w \in \tilde{V}} {f(w) s(w)} \Big / \sum _{w^{\prime } \in \tilde{V}} f(w^{\prime })$ . To supplement this sentiment analysis method, we applied a second method capable of estimating with reasonable accuracy the sentiment of individual documents. We used the sentiment classifier BIBREF38 included in the Stanford CoreNLP v3.6.0 toolkit to documents in each corpus. Documents were individually classified into one of five categories: very negative, negative, neutral, positive, very positive. The data used to train this classifier is taken from positive and negative reviews of movies (Stanford Sentiment Treebank v1.0) BIBREF38 . Topic modeling Lastly, we applied topic modeling to the causal corpus to determine what are the topical foci most discussed in causal statements. Topics were built from the causal corpus using Latent Dirichlet Allocation (LDA) BIBREF39 . Under LDA each document is modeled as a bag-of-words or unordered collection of unigrams. Topics are considered as mixtures of unigrams by estimating conditional distributions over unigrams: $P(w|T)$ , the probability of unigram $w$ given topic $T$ and documents are considered as mixtures of topics via $P(T|d)$ , the probability of topic $T$ given document $d$ . These distributions are then found via statistical inference given the observed distributions of unigrams across documents. The total number of topics is a parameter chosen by the practitioner. For this study we used the MALLET v2.0.8RC3 topic modeling toolkit BIBREF40 for model inference. By inspecting the most probable unigrams per topic (according to $P(w|T)$ ), we found 10 topics provided meaningful and distinct topics. Results We have collected approximately 1M causal statements made on Twitter over the course of 2013, and for a control we gathered the same number of statements selected at random but controlling for time of year (see Methods). We applied Parts-of-Speech (POS) and Named Entity (NE) taggers to all these texts. Some post-processed and tagged example documents, both causal and control, are shown in Fig. 1 A. We also applied sentiment analysis methods to these documents (Methods) and we have highlighted very positive and very negative words throughout Fig. 1 . In Fig. 
1 B we present odds ratios for how frequently unigrams (words), POS, or NE appear in causal documents relative to control documents. The three unigrams most strongly skewed towards causal documents were `stress', `problems', and `trouble', while the three most skewed towards control documents were `photo', `ready', and `cute'. While these are only a small number of the unigrams present, this does imply a negative sentiment bias among causal statements (we return to this point shortly). Figure 1 B also presents odds ratios for POS tags, to help us measure the differences in grammatical structure between causal and control documents (see also the Appendix for the effects of punctuation and casing on these odds ratios). The causal corpus showed greater odds for plural nouns (Penn Treebank tag: NNS), plural proper nouns (NNPS), Wh-determiners/pronouns (WDT, WP$) such as `whichever',`whatever', `whose', or `whosever', and predeterminers (PDT) such as `all' or `both'. Predeterminers quantify noun phrases such as `all' in `after all the events that caused you tears', showing that many causal statements, despite the potential brevity of social media, can encompass or delineate classes of agents and/or patients. On the other hand, the causal corpus has lower odds than the control corpus for list items (LS), proper singular nouns (NNP), and interjections (UH). Lastly, Fig. 1 B contains odds ratios for NE tags, allowing us to quantify the types of proper nouns that are more or less likely to appear in causal statements. Of the four tags, only the “Person” tag is less likely in the causal corpus than the control. (This matches the odds ratio for the proper singular noun discussed above.) Perhaps surprisingly, these results together imply that causal statements are less likely to involve individual persons than non-causal statements. There is considerable celebrity news and gossip on social media BIBREF4 ; discussions of celebrities may not be especially focused on attributing causes to these celebrities. All other NE tags, Organization, Location, and Miscellaneous, occur more frequently in the causal corpus than the control. All the odds ratios in Fig. 1 B were significant at the $\alpha = 0.05$ level except the List item marker (LS) POS tag. The unigram analysis in Fig. 1 does not incorporate higher-order phrase structure present in written language. To explore these structures specifically in the causal corpus, we constructed “cause-trees”, shown in Fig. 2 . Inspired by association mining BIBREF41 , a cause-tree is a binary tree rooted at either `caused', `causes', or `causing', that illustrates the most frequently occurring $n$ -grams that either begin or end with that root cause word (see Methods for details). The “causes” tree shows the focused writing (sentence segments) that many people use to express either the relationship between their own actions and a cause-and-effect (“even if it causes”), or the uncontrollable effect a cause may have on themselves: “causes me to have” shows a person's inability to control a causal event (“[...] i have central heterochromia which causes me to have dual colors in both eyes”). The `causing' tree reveals our ability to confine causal patterns to specific areas, and also our ability to be affected by others causal decisions. Phrases like “causing a scene in/at” and “causing a ruckus in/at” (from documents like “causing a ruckus in the hotel lobby typical [...]”) show people commonly associate bounds on where causal actions take place. 
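The cause-tree construction described in the Methods above is a greedy n-gram expansion; a minimal sketch, assuming docs is a list of token lists from the causal corpus, is given below. Trees that terminate at the root (the preceding-word direction) can be built by running the same routine on reversed documents.

```python
from collections import Counter

def grow_cause_tree(docs, root, depth=4, branch=2):
    def top_next(prefix):
        n = len(prefix)
        counts = Counter(toks[i + n]
                         for toks in docs
                         for i in range(len(toks) - n)
                         if toks[i:i + n] == prefix)
        return [w for w, _ in counts.most_common(branch)]

    def grow(prefix):
        if len(prefix) == depth:
            return {}
        return {w: grow(prefix + [w]) for w in top_next(prefix)}

    return {root: grow([root])}

# e.g. grow_cause_tree(causal_docs, "causes") for the forward tree in Fig. 2
```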
The causing tree also shows people's tendency to emphasize current negativity: Phrases like “pain this is causing” coming from documents like “cant you see the pain you are causing her” supports the sentiment bias that causal attribution is more likely for negative cause-effect associations. Finally, the `caused' tree focuses heavily on negative events and indicates people are more likely to remember negative causal events. Documents with phrases from the caused tree (“[...] appalling tragedy [...] that caused the death”, “[...] live with this pain that you caused when i was so young [...]”) exemplify the negative events that are focused on are large-scale tragedies or very personal negative events in one's life. Taken together, the popularity of negative sentiment unigrams (Fig. 1 ) and $n$ -grams (Fig. 2 ) among causal documents shows that emotional sentiment or “valence” may play a role in how people perform causal attribution BIBREF27 . The “if it bleeds, it leads” mentality among news media, where violent and negative news are more heavily reported, may appeal to this innate causal association mechanism. (On the other hand, many news media themselves use social media for reporting.) The prevalence of negative sentiment also contrasts with the “better angels of our nature” evidence of Pinker BIBREF42 , illustrating one bias that shows why many find the results of Ref. BIBREF42 surprising. Given this apparent sentiment skew, we further studied sentiment (Fig. 3 ). We compared the sentiment between the corpora in four different ways to investigate the observation (Figs. 1 B and 2 ) that people focus more about negative concepts when they discuss causality. First, we computed the mean sentiment score of each corpus using crowdsourced “labMT” scores weighted by unigram frequency (see Methods). We also applied tf-idf filtering (Methods) to exclude very common words, including the three cause-words, from the mean sentiment score. The causal corpus text was slightly negative on average while the control corpus was slightly positive (Fig. 3 A). The difference in mean sentiment score was significant (t-test: $p < 0.01$ ). Second, we moved from the mean score to the distribution of sentiment across all (scored) unigrams in the causal and control corpora (Fig. 3 B). The causal corpus contained a large group of negative sentiment unigrams, with labMT scores in the approximate range $-3 < s < -1/2$ ; the control corpus had significantly fewer unigrams in this score range. Third, in Fig. 3 C we used POS tags to categorize scored unigrams into nouns, verbs, and adjectives. Studying the distributions for each, we found that nouns explain much of the overall difference observed in Fig. 3 B, with verbs showing a similar but smaller difference between the two corpora. Adjectives showed little difference. The distributions in Fig. 3 C account for 87.8% of scored text in the causal corpus and 77.2% of the control corpus. The difference in sentiment between corpora was significant for all distributions (t-test: $p < 0.01$ ). Fourth, to further confirm that the causal documents tend toward negative sentiment, we applied a separate, independent sentiment analysis using the Stanford NLP sentiment toolkit BIBREF38 to classify the sentiment of individual documents not unigrams (see Methods). Instead of a numeric sentiment score, this classifier assigns documents to one of five categories ranging from very negative to very positive. 
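As an aside, the frequency-weighted mean labMT score behind Fig. 3A reduces to a short helper; labmt is assumed to hold the already re-centred scores, and tfidf with threshold implement the 90th-percentile filter from the Methods.

```python
def corpus_sentiment(freqs, labmt, tfidf, threshold):
    # Frequency-weighted mean labMT score over the tf-idf-filtered unigram set.
    V = [w for w in freqs if w in labmt and tfidf.get(w, 0.0) > threshold]
    total = sum(freqs[w] for w in V)
    return sum(freqs[w] * labmt[w] for w in V) / total

# e.g. corpus_sentiment(causal_freqs, labmt, causal_tfidf, causal_p90) versus
#      corpus_sentiment(control_freqs, labmt, control_tfidf, control_p90)
```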
The classifier showed that the causal corpus contains more negative and very negative documents than the control corpus, while the control corpus contains more neutral, positive, and very positive documents (Fig. 3 D). We have found language (Figs. 1 and 2 ) and sentiment (Fig. 3 ) differences between causal statements made on social media compared with other social media statements. But what is being discussed? What are the topical foci of causal statements? To study this, for our last analysis we applied topic models to the causal statements. Topic modeling finds groups of related terms (unigrams) by considering similarities between how those terms co-occur across a set of documents. We used the popular topic modeling method Latent Dirichlet Allocation (LDA) BIBREF39 . We ranked unigrams by how strongly associated they were with the topic. Inspecting these unigrams we found that a 10-topic model discovered meaningful topics. See Methods for full details. The top unigrams for each topic are shown in Tab. 1 . Topics in the causal corpus tend to fall into three main categories: (i) news, covering current events, weather, etc.; (ii) medicine and health, covering cancer, obesity, stress, etc.; and (iii) relationships, covering problems, stress, crisis, drama, sorry, etc. While the topics are quite different, they are all similar in their use of negative sentiment words. The negative/global features in the `news' topic are captured in the most representative words: damage, fire, power, etc. Similar to news, the `accident' topic balances the more frequent day-to-day minor frustrations with the less frequent but more severe impacts of car accidents. The words `traffic' and `delays' are the most probable words for this topic, and are common, low-impact occurrences. On the contrary, `crash', `car', `accident' and `death' are the next most probable words for the accident topic, and generally show a focus on less-common but higher-impact events. The `medical' topic also focused on negative words; highly probable words for this topic included `cancer', `break', `disease', `blood', etc. Meanwhile, the `body' topic contained words like: `stress', `lose', and `weight', giving a focus on on our more personal struggles with body image. Besides body image, the `injuries' topic uses specific pronouns (`his', `him', `her') in references to a person's own injuries or the injuries of others such as athletes. Aside from more factual information, social information is well represented in causal statements. The `problems' topic shows people attribute their problems to many others with terms like: `dont', `people', `they', `them'. The `stress' topic also uses general words such as `more', `than', or `people' to link stress to all people, and in the same vein, the `crisis' topic focuses on problems within organizations such as governments. The `drama' and `sorry' topics tend towards more specific causal statements. Drama used the words: `like', `she', and `her' while documents in the sorry topic tended to address other people. The topics of causal documents discovered by LDA showed that both general and specific statements are made regarding news, medicine, and relationships when individuals make causal attributions online. Discussion The power of online communication is the speed and ease with which information can be propagated by potentially any connected users. Yet these strengths come at a cost: rumors and misinformation also spread easily. 
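For readers who want to reproduce a topic model along the lines of Table 1, the following sketch uses gensim as a readily available stand-in for the MALLET toolkit used in the paper, so the inference details differ from the original setup.

```python
from gensim import corpora
from gensim.models import LdaModel

def topic_model(causal_docs, num_topics=10):
    # causal_docs: list of token lists (bag-of-words per causal document)
    dictionary = corpora.Dictionary(causal_docs)
    bow = [dictionary.doc2bow(doc) for doc in causal_docs]
    return LdaModel(bow, num_topics=num_topics, id2word=dictionary, passes=5)

# Most probable unigrams per topic, analogous to Table 1:
# for idx, words in topic_model(causal_docs).show_topics(num_words=8):
#     print(idx, words)
```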
Causal misattribution is at the heart of many rumors, conspiracy theories, and misinformation campaigns. Given the central role of causal statements, further studies of the interplay of information propagation and online causal attributions are crucial. Are causal statements more likely to spread online and, if so, in which ways? What types of social media users are more or less likely to make causal statements? Will a user be more likely to make a causal statement if they have recently been exposed to one or more causal statements from other users? The topics of causal statements also bring forth important questions to be addressed: how timely are causal statements? Are certain topics always being discussed in causal statements? Are there causal topics that are very popular for only brief periods and then forgotten? Temporal dynamics of causal statements are also interesting: do time-of-day or time-of-year factors play a role in how causal statements are made? Our work here focused on a limited subset of causal statements, but more generally, these results may inform new methods for automatically detecting causal statements from unstructured, natural language text BIBREF17 . Better computational tools focused on causal statements are an important step towards further understanding misinformation campaigns and other online activities. Lastly, an important but deeply challenging open question is how, if it is even possible, to validate the accuracy of causal statements. Can causal statements be ranked by some confidence metric(s)? We hope to pursue these and other questions in future research. Parts-of-speech tagging depends on punctuation and casing, which we filtered in our data, so a study of how robust the POS algorithm is to punctuation and casing removal is important. We computed POS tags for the corpora with and without casing as well as with and without punctuation (which includes hashtags, links and at-symbols). Two tags mentioned in Fig. 1 B, NNPS and LS (which was not significant), were affected by punctuation removal. Otherwise, there is a strong correlation (Fig. 4 ) between Odds Ratios (causal vs. control) with punctuation and without punctuation, including casing and without casing ( $\rho = 0.71$ and $0.80$ , respectively), indicating the POS differences between the corpora were primarily not due to the removal of punctuation or casing. Acknowledgments We thank R. Gallagher for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. ISS-1447634.
They identify documents that contain the unigrams 'caused', 'causing', or 'causes'
1baf87437b70cc0375b8b7dc2cfc2830279bc8b5
1baf87437b70cc0375b8b7dc2cfc2830279bc8b5_0
Q: What is the source of the "control" corpus? Text: Introduction Social media and online social networks now provide vast amounts of data on human online discourse and other activities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . With so much communication taking place online and with social media being capable of hosting powerful misinformation campaigns BIBREF7 such as those claiming vaccines cause autism BIBREF8 , BIBREF9 , it is more important than ever to better understand the discourse of causality and the interplay between online communication and the statement of cause and effect. Causal inference is a crucial way that humans comprehend the world, and it has been a major focus of philosophy, statistics, mathematics, psychology, and the cognitive sciences. Philosophers such as Hume and Kant have long argued whether causality is a human-centric illusion or the discovery of a priori truth BIBREF10 , BIBREF11 . Causal inference in science is incredibly important, and researchers have developed statistical measures such as Granger causality BIBREF12 , mathematical and probabilistic frameworks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , and text mining procedures BIBREF17 , BIBREF18 , BIBREF19 to better infer causal influence from data. In the cognitive sciences, the famous perception experiments of Michotte et al. led to a long line of research exploring the cognitive biases that humans possess when attempting to link cause and effect BIBREF20 , BIBREF21 , BIBREF22 . How humans understand and communicate cause and effect relationships is complicated, and is influenced by language structure BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 and sentiment or valence BIBREF27 . A key finding is that the perceived emphasis or causal weight changes between the agent (the grammatical construct responsible for a cause) and the patient (the construct effected by the cause) depending on the types of verbs used to describe the cause and effect. Researchers have hypothesized BIBREF28 that this is because of the innate weighting property of the verbs in the English language that humans use to attribute causes and effects. Another finding is the role of a valence bias: the volume and intensity of causal reasoning may increase due to negative feedback or negative events BIBREF27 . Despite these long lines of research, causal attributions made via social media or online social networks have not been well studied. The goal of this paper is to explore the language and topics of causal statements in a large corpus of social media taken from Twitter. We hypothesize that language and sentiment biases play a significant role in these statements, and that tools from natural language processing and computational linguistics can be used to study them. We do not attempt to study the factual correctness of these statements or offer any degree of verification, nor do we exhaustively identify and extract all causal statements from these data. Instead, here we focus on statements that are with high certainty causal statements, with the goal to better understand key characteristics about causal statements that differ from everyday online communication. The rest of this paper is organized as follows: In Sec. "Materials and Methods" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. 
"Results" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. "Discussion" . Dataset, filtering, and corpus selection Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work. All document text was processed the same way. Punctuation, XML characters, and hyperlinks were removed, as were Twitter-specific “at-mentions” and “hashtags” (see also the Appendix). There is useful information here, but it is either not natural language text, or it is Twitter-specific, or both. Documents were broken into individual words (unigrams) on whitespace. Casing information was retained, as we will use it for our Named Entity analysis, but otherwise all words were considered lowercase only (see also the Appendix). Stemming BIBREF30 and lemmatization BIBREF31 were not performed. Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively. Tagging and corpus comparison Documents were further studied by annotating their unigrams with Parts-of-Speech (POS) and Named Entities (NE) tags. POS tagging was done using NLTK v3.1 BIBREF29 which implements an averaged perceptron classifier BIBREF32 trained on the Brown Corpus BIBREF33 . (POS tagging is affected by punctuation; we show in the Appendix that our results are relatively robust to the removal of punctuation.) POS tags denote the nouns, verbs, and other grammatical constructs present in a document. Named Entity Recognition (NER) was performed using the 4-class, distributional similarity tagger provided as part of the Stanford CoreNLP v3.6.0 toolkit BIBREF34 . 
NER aims to identify and classify proper words in a text. The NE classifications considered were: Organization, Location, Person, and Misc. The Stanford NER tagger uses a conditional random field model BIBREF35 trained on diverse sets of manually-tagged English-language data (CoNLL-2003) BIBREF34 . Conditional random fields allow dependencies between words so that `New York' and `New York Times', for example, are classified separately as a location and organization, respectively. These taggers are commonly used and often provide reasonably accurate results, but there is always potential ambiguity in written text and improving upon these methods remains an active area of research. Unigrams, POS, and NEs were compared between the cause and control corpora using odds ratios (ORs): $$\operatorname{OR}(x) = \frac{p_C(x)/ (1-p_C(x))}{p_N(x) / (1-p_N(x))},$$ (Eq. 1) where $p_C(x)$ and $p_N(x)$ are the probabilities that a unigram, POS, or NE $x$ occurs in the causal and control corpus, respectively. These probabilities were computed for each corpus separately as $p(x) = f(x) / \sum _{x^{\prime } \in V} f(x^{\prime })$ , where $f(x)$ is the total number of occurrences of $x$ in the corpus and $V$ is the relevant set of unigrams, POS, or NEs. Confidence intervals for the ORs were computed using Wald's methodology BIBREF36 . As there are many unique unigrams in the text, when computing unigram ORs we focused on the most meaningful unigrams within each corpus by using the following filtering criteria: we considered only the $\operatorname{OR}$ s of the 1500 most frequent unigrams in that corpus that also have a term-frequency-inverse-document-frequency (tf-idf) score above the 90th percentile for that corpus BIBREF37 . The tf-idf was computed as $$\mbox{tf-idf}(w) = \log f(w) \times \log \left( \frac{D}{\mathit{df}(w)} \right) ,$$ (Eq. 2) where $D$ is the total number of documents in the corpus, and $\mathit {df}(w)$ is the number of documents in the corpus containing unigram $w$ . Intuitively, unigrams with higher tf-idf scores appear frequently, but are not so frequent that they are ubiquitous through all documents. Filtering via tf-idf is standard practice in the information retrieval and data mining fields. Cause-trees For a better understanding of the higher-order language structure present in text phrases, cause-trees were constructed. A cause-tree starts with a root cause word (either `caused', `causing' or `causes'), then the two most probable words following (preceding) the root are identified. Next, the root word plus one of the top probable words is combined into a bigram and the top two most probable words following (preceding) this bigram are found. Repeatedly applying this process builds a binary tree representing the $n$ -grams that begin with (terminate at) the root word. This process can continue until a certain $n$ -gram length is reached or until there are no more documents long enough to search. Sentiment analysis Sentiment analysis was applied to estimate the emotional content of documents. Two levels of analysis were used: a method where individual unigrams were given crowdsourced numeric sentiment scores, and a second method involving a trained classifier that can incorporate document-level phrase information. For the first sentiment analysis, each unigram $w$ was assigned a crowdsourced “labMT” sentiment score $s(w)$ BIBREF5 . (Unlike BIBREF5 , scores were recentered by subtracting the mean, $s(w) \leftarrow s(w)-\left<s\right>$ .)
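For concreteness, a minimal sketch of Eqs. 1 and 2 as code is given below; it is our own illustration, with `causal_counts`, `control_counts`, `doc_freq`, and `n_docs` standing in for corpus statistics that the real pipeline would supply.

```python
# Hedged sketch of the odds-ratio (Eq. 1) and tf-idf (Eq. 2) computations.
import math
from collections import Counter

def odds_ratio(x, causal_counts: Counter, control_counts: Counter) -> float:
    # p(x) = f(x) / sum over the relevant set, computed per corpus
    p_c = causal_counts[x] / sum(causal_counts.values())
    p_n = control_counts[x] / sum(control_counts.values())
    return (p_c / (1 - p_c)) / (p_n / (1 - p_n))

def tf_idf(w, counts: Counter, doc_freq: dict, n_docs: int) -> float:
    # tf-idf(w) = log f(w) * log(D / df(w))
    return math.log(counts[w]) * math.log(n_docs / doc_freq[w])
```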
Unigrams determined by volunteer raters to have a negative emotional sentiment (`hate', `death', etc.) have $s(w) < 0$ , while unigrams determined to have a positive emotional sentiment (`love', `happy', etc.) tend to have $s(w) > 0$ . Unigrams that have labMT scores and are above the 90th percentile of tf-idf for the corpus form the set $\tilde{V}$ . (Unigrams in $\tilde{V}$ need not be among the 1500 most frequent unigrams.) The set $\tilde{V}$ captures 87.9% (91.5%) of total unigrams in the causal (control) corpus. Crucially, the tf-idf filtering ensures that the words `caused', `causes', and `causing', which have a slight negative sentiment, are not included and do not introduce a systematic bias when comparing the two corpora. This sentiment measure works on a per-unigram basis, and is therefore best suited for large bodies of text, not short documents BIBREF5 . Instead of considering individual documents, the distributions of labMT scores over all unigrams for each corpus were used to compare the corpora. In addition, a single sentiment score for each corpus was computed as the average sentiment score over all unigrams in that corpus, weighted by unigram frequency: $\sum _{w \in \tilde{V}} {f(w) s(w)} \Big / \sum _{w^{\prime } \in \tilde{V}} f(w^{\prime })$ . To supplement this sentiment analysis method, we applied a second method capable of estimating with reasonable accuracy the sentiment of individual documents. We applied the sentiment classifier BIBREF38 included in the Stanford CoreNLP v3.6.0 toolkit to the documents in each corpus. Documents were individually classified into one of five categories: very negative, negative, neutral, positive, very positive. The data used to train this classifier is taken from positive and negative reviews of movies (Stanford Sentiment Treebank v1.0) BIBREF38 . Topic modeling Lastly, we applied topic modeling to the causal corpus to determine which topical foci are most discussed in causal statements. Topics were built from the causal corpus using Latent Dirichlet Allocation (LDA) BIBREF39 . Under LDA each document is modeled as a bag-of-words or unordered collection of unigrams. Topics are considered as mixtures of unigrams by estimating conditional distributions over unigrams: $P(w|T)$ , the probability of unigram $w$ given topic $T$ , and documents are considered as mixtures of topics via $P(T|d)$ , the probability of topic $T$ given document $d$ . These distributions are then found via statistical inference given the observed distributions of unigrams across documents. The total number of topics is a parameter chosen by the practitioner. For this study we used the MALLET v2.0.8RC3 topic modeling toolkit BIBREF40 for model inference. By inspecting the most probable unigrams per topic (according to $P(w|T)$ ), we found that a 10-topic model provided meaningful and distinct topics. Results We have collected approximately 1M causal statements made on Twitter over the course of 2013, and for a control we gathered the same number of statements selected at random but controlling for time of year (see Methods). We applied Parts-of-Speech (POS) and Named Entity (NE) taggers to all these texts. Some post-processed and tagged example documents, both causal and control, are shown in Fig. 1 A. We also applied sentiment analysis methods to these documents (Methods) and we have highlighted very positive and very negative words throughout Fig. 1 . In Fig.
1 B we present odds ratios for how frequently unigrams (words), POS, or NE appear in causal documents relative to control documents. The three unigrams most strongly skewed towards causal documents were `stress', `problems', and `trouble', while the three most skewed towards control documents were `photo', `ready', and `cute'. While these are only a small number of the unigrams present, this does imply a negative sentiment bias among causal statements (we return to this point shortly). Figure 1 B also presents odds ratios for POS tags, to help us measure the differences in grammatical structure between causal and control documents (see also the Appendix for the effects of punctuation and casing on these odds ratios). The causal corpus showed greater odds for plural nouns (Penn Treebank tag: NNS), plural proper nouns (NNPS), Wh-determiners/pronouns (WDT, WP$) such as `whichever',`whatever', `whose', or `whosever', and predeterminers (PDT) such as `all' or `both'. Predeterminers quantify noun phrases such as `all' in `after all the events that caused you tears', showing that many causal statements, despite the potential brevity of social media, can encompass or delineate classes of agents and/or patients. On the other hand, the causal corpus has lower odds than the control corpus for list items (LS), proper singular nouns (NNP), and interjections (UH). Lastly, Fig. 1 B contains odds ratios for NE tags, allowing us to quantify the types of proper nouns that are more or less likely to appear in causal statements. Of the four tags, only the “Person” tag is less likely in the causal corpus than the control. (This matches the odds ratio for the proper singular noun discussed above.) Perhaps surprisingly, these results together imply that causal statements are less likely to involve individual persons than non-causal statements. There is considerable celebrity news and gossip on social media BIBREF4 ; discussions of celebrities may not be especially focused on attributing causes to these celebrities. All other NE tags, Organization, Location, and Miscellaneous, occur more frequently in the causal corpus than the control. All the odds ratios in Fig. 1 B were significant at the $\alpha = 0.05$ level except the List item marker (LS) POS tag. The unigram analysis in Fig. 1 does not incorporate higher-order phrase structure present in written language. To explore these structures specifically in the causal corpus, we constructed “cause-trees”, shown in Fig. 2 . Inspired by association mining BIBREF41 , a cause-tree is a binary tree rooted at either `caused', `causes', or `causing', that illustrates the most frequently occurring $n$ -grams that either begin or end with that root cause word (see Methods for details). The “causes” tree shows the focused writing (sentence segments) that many people use to express either the relationship between their own actions and a cause-and-effect (“even if it causes”), or the uncontrollable effect a cause may have on themselves: “causes me to have” shows a person's inability to control a causal event (“[...] i have central heterochromia which causes me to have dual colors in both eyes”). The `causing' tree reveals our ability to confine causal patterns to specific areas, and also our ability to be affected by others causal decisions. Phrases like “causing a scene in/at” and “causing a ruckus in/at” (from documents like “causing a ruckus in the hotel lobby typical [...]”) show people commonly associate bounds on where causal actions take place. 
The causing tree also shows people's tendency to emphasize current negativity: phrases like “pain this is causing”, coming from documents like “cant you see the pain you are causing her”, support the sentiment bias that causal attribution is more likely for negative cause-effect associations. Finally, the `caused' tree focuses heavily on negative events and indicates that people are more likely to remember negative causal events. Documents with phrases from the caused tree (“[...] appalling tragedy [...] that caused the death”, “[...] live with this pain that you caused when i was so young [...]”) exemplify that the negative events in focus are either large-scale tragedies or very personal negative events in one's life. Taken together, the popularity of negative sentiment unigrams (Fig. 1 ) and $n$ -grams (Fig. 2 ) among causal documents shows that emotional sentiment or “valence” may play a role in how people perform causal attribution BIBREF27 . The “if it bleeds, it leads” mentality among news media, where violent and negative news are more heavily reported, may appeal to this innate causal association mechanism. (On the other hand, many news media themselves use social media for reporting.) The prevalence of negative sentiment also contrasts with the “better angels of our nature” evidence of Pinker BIBREF42 , illustrating one bias that may help explain why many find the results of Ref. BIBREF42 surprising. Given this apparent sentiment skew, we further studied sentiment (Fig. 3 ). We compared the sentiment between the corpora in four different ways to investigate the observation (Figs. 1 B and 2 ) that people focus more on negative concepts when they discuss causality. First, we computed the mean sentiment score of each corpus using crowdsourced “labMT” scores weighted by unigram frequency (see Methods). We also applied tf-idf filtering (Methods) to exclude very common words, including the three cause-words, from the mean sentiment score. The causal corpus text was slightly negative on average while the control corpus was slightly positive (Fig. 3 A). The difference in mean sentiment score was significant (t-test: $p < 0.01$ ). Second, we moved from the mean score to the distribution of sentiment across all (scored) unigrams in the causal and control corpora (Fig. 3 B). The causal corpus contained a large group of negative sentiment unigrams, with labMT scores in the approximate range $-3 < s < -1/2$ ; the control corpus had significantly fewer unigrams in this score range. Third, in Fig. 3 C we used POS tags to categorize scored unigrams into nouns, verbs, and adjectives. Studying the distributions for each, we found that nouns explain much of the overall difference observed in Fig. 3 B, with verbs showing a similar but smaller difference between the two corpora. Adjectives showed little difference. The distributions in Fig. 3 C account for 87.8% of scored text in the causal corpus and 77.2% of the control corpus. The difference in sentiment between corpora was significant for all distributions (t-test: $p < 0.01$ ). Fourth, to further confirm that the causal documents tend toward negative sentiment, we applied a separate, independent sentiment analysis using the Stanford NLP sentiment toolkit BIBREF38 to classify the sentiment of individual documents, not unigrams (see Methods). Instead of a numeric sentiment score, this classifier assigns documents to one of five categories ranging from very negative to very positive.
The classifier showed that the causal corpus contains more negative and very negative documents than the control corpus, while the control corpus contains more neutral, positive, and very positive documents (Fig. 3 D). We have found language (Figs. 1 and 2 ) and sentiment (Fig. 3 ) differences between causal statements made on social media compared with other social media statements. But what is being discussed? What are the topical foci of causal statements? To study this, for our last analysis we applied topic models to the causal statements. Topic modeling finds groups of related terms (unigrams) by considering similarities between how those terms co-occur across a set of documents. We used the popular topic modeling method Latent Dirichlet Allocation (LDA) BIBREF39 . We ranked unigrams by how strongly associated they were with the topic. Inspecting these unigrams, we found that a 10-topic model discovered meaningful topics. See Methods for full details. The top unigrams for each topic are shown in Tab. 1 . Topics in the causal corpus tend to fall into three main categories: (i) news, covering current events, weather, etc.; (ii) medicine and health, covering cancer, obesity, stress, etc.; and (iii) relationships, covering problems, stress, crisis, drama, sorry, etc. While the topics are quite different, they are all similar in their use of negative sentiment words. The negative/global features in the `news' topic are captured in the most representative words: damage, fire, power, etc. Similar to news, the `accident' topic balances the more frequent day-to-day minor frustrations with the less frequent but more severe impacts of car accidents. The words `traffic' and `delays' are the most probable words for this topic, and are common, low-impact occurrences. In contrast, `crash', `car', `accident' and `death' are the next most probable words for the accident topic, and generally show a focus on less-common but higher-impact events. The `medical' topic also focused on negative words; highly probable words for this topic included `cancer', `break', `disease', `blood', etc. Meanwhile, the `body' topic contained words like `stress', `lose', and `weight', giving a focus on our more personal struggles with body image. Besides body image, the `injuries' topic uses specific pronouns (`his', `him', `her') in reference to a person's own injuries or the injuries of others such as athletes. Aside from more factual information, social information is well represented in causal statements. The `problems' topic shows that people attribute their problems to many others with terms like `dont', `people', `they', `them'. The `stress' topic also uses general words such as `more', `than', or `people' to link stress to all people, and in the same vein, the `crisis' topic focuses on problems within organizations such as governments. The `drama' and `sorry' topics tend towards more specific causal statements. The `drama' topic used the words `like', `she', and `her', while documents in the `sorry' topic tended to address other people. The topics of causal documents discovered by LDA showed that both general and specific statements are made regarding news, medicine, and relationships when individuals make causal attributions online. Discussion The power of online communication is the speed and ease with which information can be propagated by potentially any connected users. Yet these strengths come at a cost: rumors and misinformation also spread easily.
Causal misattribution is at the heart of many rumors, conspiracy theories, and misinformation campaigns. Given the central role of causal statements, further studies of the interplay of information propagation and online causal attributions are crucial. Are causal statements more likely to spread online and, if so, in which ways? What types of social media users are more or less likely to make causal statements? Will a user be more likely to make a causal statement if they have recently been exposed to one or more causal statements from other users? The topics of causal statements also bring forth important questions to be addressed: how timely are causal statements? Are certain topics always being discussed in causal statements? Are there causal topics that are very popular for only brief periods and then forgotten? Temporal dynamics of causal statements are also interesting: do time-of-day or time-of-year factors play a role in how causal statements are made? Our work here focused on a limited subset of causal statements, but more generally, these results may inform new methods for automatically detecting causal statements from unstructured, natural language text BIBREF17 . Better computational tools focused on causal statements are an important step towards further understanding misinformation campaigns and other online activities. Lastly, an important but deeply challenging open question is how, if it is even possible, to validate the accuracy of causal statements. Can causal statements be ranked by some confidence metric(s)? We hope to pursue these and other questions in future research. Parts-of-speech tagging depends on punctuation and casing, which we filtered in our data, so a study of how robust the POS algorithm is to punctuation and casing removal is important. We computed POS tags for the corpora with and without casing as well as with and without punctuation (which includes hashtags, links and at-symbols). Two tags mentioned in Fig. 1 B, NNPS and LS (which was not significant), were affected by punctuation removal. Otherwise, there is a strong correlation (Fig. 4 ) between Odds Ratios (causal vs. control) with punctuation and without punctuation, including casing and without casing ( $\rho = 0.71$ and $0.80$ , respectively), indicating the POS differences between the corpora were primarily not due to the removal of punctuation or casing. Acknowledgments We thank R. Gallagher for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. ISS-1447634.
Randomly selected from a Twitter dump, temporally matched to causal documents
0b31eb5bb111770a3aaf8a3931d8613e578e07a8
0b31eb5bb111770a3aaf8a3931d8613e578e07a8_0
Q: What are the selection criteria for "causal statements"? Text: Introduction Social media and online social networks now provide vast amounts of data on human online discourse and other activities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . With so much communication taking place online and with social media being capable of hosting powerful misinformation campaigns BIBREF7 such as those claiming vaccines cause autism BIBREF8 , BIBREF9 , it is more important than ever to better understand the discourse of causality and the interplay between online communication and the statement of cause and effect. Causal inference is a crucial way that humans comprehend the world, and it has been a major focus of philosophy, statistics, mathematics, psychology, and the cognitive sciences. Philosophers such as Hume and Kant have long argued whether causality is a human-centric illusion or the discovery of a priori truth BIBREF10 , BIBREF11 . Causal inference in science is incredibly important, and researchers have developed statistical measures such as Granger causality BIBREF12 , mathematical and probabilistic frameworks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , and text mining procedures BIBREF17 , BIBREF18 , BIBREF19 to better infer causal influence from data. In the cognitive sciences, the famous perception experiments of Michotte et al. led to a long line of research exploring the cognitive biases that humans possess when attempting to link cause and effect BIBREF20 , BIBREF21 , BIBREF22 . How humans understand and communicate cause and effect relationships is complicated, and is influenced by language structure BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 and sentiment or valence BIBREF27 . A key finding is that the perceived emphasis or causal weight changes between the agent (the grammatical construct responsible for a cause) and the patient (the construct effected by the cause) depending on the types of verbs used to describe the cause and effect. Researchers have hypothesized BIBREF28 that this is because of the innate weighting property of the verbs in the English language that humans use to attribute causes and effects. Another finding is the role of a valence bias: the volume and intensity of causal reasoning may increase due to negative feedback or negative events BIBREF27 . Despite these long lines of research, causal attributions made via social media or online social networks have not been well studied. The goal of this paper is to explore the language and topics of causal statements in a large corpus of social media taken from Twitter. We hypothesize that language and sentiment biases play a significant role in these statements, and that tools from natural language processing and computational linguistics can be used to study them. We do not attempt to study the factual correctness of these statements or offer any degree of verification, nor do we exhaustively identify and extract all causal statements from these data. Instead, here we focus on statements that are with high certainty causal statements, with the goal to better understand key characteristics about causal statements that differ from everyday online communication. The rest of this paper is organized as follows: In Sec. 
"Materials and Methods" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. "Results" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. "Discussion" . Dataset, filtering, and corpus selection Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work. All document text was processed the same way. Punctuation, XML characters, and hyperlinks were removed, as were Twitter-specific “at-mentions” and “hashtags” (see also the Appendix). There is useful information here, but it is either not natural language text, or it is Twitter-specific, or both. Documents were broken into individual words (unigrams) on whitespace. Casing information was retained, as we will use it for our Named Entity analysis, but otherwise all words were considered lowercase only (see also the Appendix). Stemming BIBREF30 and lemmatization BIBREF31 were not performed. Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively. Tagging and corpus comparison Documents were further studied by annotating their unigrams with Parts-of-Speech (POS) and Named Entities (NE) tags. POS tagging was done using NLTK v3.1 BIBREF29 which implements an averaged perceptron classifier BIBREF32 trained on the Brown Corpus BIBREF33 . 
(POS tagging is affected by punctuation; we show in the Appendix that our results are relatively robust to the removal of punctuation.) POS tags denote the nouns, verbs, and other grammatical constructs present in a document. Named Entity Recognition (NER) was performed using the 4-class, distributional similarity tagger provided as part of the Stanford CoreNLP v3.6.0 toolkit BIBREF34 . NER aims to identify and classify proper words in a text. The NE classifications considered were: Organization, Location, Person, and Misc. The Stanford NER tagger uses a conditional random field model BIBREF35 trained on diverse sets of manually-tagged English-language data (CoNLL-2003) BIBREF34 . Conditional random fields allow dependencies between words so that `New York' and `New York Times', for example, are classified separately as a location and organization, respectively. These taggers are commonly used and often provide reasonably accurate results, but there is always potential ambiguity in written text and improving upon these methods remains an active area of research. Unigrams, POS, and NEs were compared between the cause and control corpora using odds ratios (ORs): $$\operatorname{OR}(x) = \frac{p_C(x)/ (1-p_C(x))}{p_N(x) / (1-p_N(x))},$$ (Eq. 1) where $p_C(x)$ and $p_N(x)$ are the probabilities that a unigram, POS, or NE $x$ occurs in the causal and control corpus, respectively. These probabilities were computed for each corpus separately as $p(x) = f(x) / \sum _{x^{\prime } \in V} f(x^{\prime })$ , where $f(x)$ is the total number of occurrences of $x$ in the corpus and $V$ is the relevant set of unigrams, POS, or NEs. Confidence intervals for the ORs were computed using Wald's methodology BIBREF36 . As there are many unique unigrams in the text, when computing unigram ORs we focused on the most meaningful unigrams within each corpus by using the following filtering criteria: we considered only the $\operatorname{OR}$ s of the 1500 most frequent unigrams in that corpus that also have a term-frequency-inverse-document-frequency (tf-idf) score above the 90th percentile for that corpus BIBREF37 . The tf-idf was computed as $$\mbox{tf-idf}(w) = \log f(w) \times \log \left( \frac{D}{\mathit{df}(w)} \right) ,$$ (Eq. 2) where $D$ is the total number of documents in the corpus, and $\mathit {df}(w)$ is the number of documents in the corpus containing unigram $w$ . Intuitively, unigrams with higher tf-idf scores appear frequently, but are not so frequent that they are ubiquitous through all documents. Filtering via tf-idf is standard practice in the information retrieval and data mining fields. Cause-trees For a better understanding of the higher-order language structure present in text phrases, cause-trees were constructed. A cause-tree starts with a root cause word (either `caused', `causing' or `causes'), then the two most probable words following (preceding) the root are identified. Next, the root word plus one of the top probable words is combined into a bigram and the top two most probable words following (preceding) this bigram are found. Repeatedly applying this process builds a binary tree representing the $n$ -grams that begin with (terminate at) the root word. This process can continue until a certain $n$ -gram length is reached or until there are no more documents long enough to search. Sentiment analysis Sentiment analysis was applied to estimate the emotional content of documents.
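Before turning to sentiment, a simplified sketch of the forward-direction cause-tree construction described above is given; it is our own greedy illustration and ignores the preceding-word direction and the exact stopping criteria, using a fixed depth instead.

```python
# Simplified, forward-only cause-tree sketch (assumptions noted above).
from collections import Counter

def next_word_counts(ngram, documents):
    """Count words that immediately follow `ngram` (a tuple of tokens) across tokenized documents."""
    counts, n = Counter(), len(ngram)
    for doc in documents:                       # each doc is a list of tokens
        for i in range(len(doc) - n):
            if tuple(doc[i:i + n]) == ngram:
                counts[doc[i + n]] += 1
    return counts

def build_cause_tree(root, documents, depth=3):
    """Grow a binary tree of the two most frequent continuations at each step."""
    tree = {root: {}}
    frontier = [((root,), tree[root])]
    for _ in range(depth):
        new_frontier = []
        for ngram, node in frontier:
            for word, _ in next_word_counts(ngram, documents).most_common(2):
                node[word] = {}
                new_frontier.append((ngram + (word,), node[word]))
        frontier = new_frontier
    return tree
```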
Two levels of analysis were used: a method where individual unigrams were given crowdsourced numeric sentiment scores, and a second method involving a trained classifier that can incorporate document-level phrase information. For the first sentiment analysis, each unigram $w$ was assigned a crowdsourced “labMT” sentiment score $s(w)$ BIBREF5 . (Unlike BIBREF5 , scores were recentered by subtracting the mean, $s(w) \leftarrow s(w)-\left<s\right>$ .) Unigrams determined by volunteer raters to have a negative emotional sentiment (`hate', `death', etc.) have $s(w) < 0$ , while unigrams determined to have a positive emotional sentiment (`love', `happy', etc.) tend to have $s(w) > 0$ . Unigrams that have labMT scores and are above the 90th percentile of tf-idf for the corpus form the set $\tilde{V}$ . (Unigrams in $\tilde{V}$ need not be among the 1500 most frequent unigrams.) The set $\tilde{V}$ captures 87.9% (91.5%) of total unigrams in the causal (control) corpus. Crucially, the tf-idf filtering ensures that the words `caused', `causes', and `causing', which have a slight negative sentiment, are not included and do not introduce a systematic bias when comparing the two corpora. This sentiment measure works on a per-unigram basis, and is therefore best suited for large bodies of text, not short documents BIBREF5 . Instead of considering individual documents, the distributions of labMT scores over all unigrams for each corpus were used to compare the corpora. In addition, a single sentiment score for each corpus was computed as the average sentiment score over all unigrams in that corpus, weighted by unigram frequency: $\sum _{w \in \tilde{V}} {f(w) s(w)} \Big / \sum _{w^{\prime } \in \tilde{V}} f(w^{\prime })$ . To supplement this sentiment analysis method, we applied a second method capable of estimating with reasonable accuracy the sentiment of individual documents. We applied the sentiment classifier BIBREF38 included in the Stanford CoreNLP v3.6.0 toolkit to the documents in each corpus. Documents were individually classified into one of five categories: very negative, negative, neutral, positive, very positive. The data used to train this classifier is taken from positive and negative reviews of movies (Stanford Sentiment Treebank v1.0) BIBREF38 . Topic modeling Lastly, we applied topic modeling to the causal corpus to determine which topical foci are most discussed in causal statements. Topics were built from the causal corpus using Latent Dirichlet Allocation (LDA) BIBREF39 . Under LDA each document is modeled as a bag-of-words or unordered collection of unigrams. Topics are considered as mixtures of unigrams by estimating conditional distributions over unigrams: $P(w|T)$ , the probability of unigram $w$ given topic $T$ , and documents are considered as mixtures of topics via $P(T|d)$ , the probability of topic $T$ given document $d$ . These distributions are then found via statistical inference given the observed distributions of unigrams across documents. The total number of topics is a parameter chosen by the practitioner. For this study we used the MALLET v2.0.8RC3 topic modeling toolkit BIBREF40 for model inference. By inspecting the most probable unigrams per topic (according to $P(w|T)$ ), we found that a 10-topic model provided meaningful and distinct topics. Results We have collected approximately 1M causal statements made on Twitter over the course of 2013, and for a control we gathered the same number of statements selected at random but controlling for time of year (see Methods).
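As a small illustration of the frequency-weighted corpus sentiment score defined above, one might compute it as follows; `labmt` and `counts` are placeholder inputs standing in for the recentered labMT scores and the tf-idf-filtered unigram frequencies.

```python
# Sketch of the weighted mean sentiment over the filtered unigram set (assumptions above).
def corpus_sentiment(counts, labmt):
    """Frequency-weighted average of s(w) over unigrams that have a labMT score."""
    scored = {w: f for w, f in counts.items() if w in labmt}
    total = sum(scored.values())
    return sum(f * labmt[w] for w, f in scored.items()) / total
```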
We applied Parts-of-Speech (POS) and Named Entity (NE) taggers to all these texts. Some post-processed and tagged example documents, both causal and control, are shown in Fig. 1 A. We also applied sentiment analysis methods to these documents (Methods) and we have highlighted very positive and very negative words throughout Fig. 1 . In Fig. 1 B we present odds ratios for how frequently unigrams (words), POS, or NE appear in causal documents relative to control documents. The three unigrams most strongly skewed towards causal documents were `stress', `problems', and `trouble', while the three most skewed towards control documents were `photo', `ready', and `cute'. While these are only a small number of the unigrams present, this does imply a negative sentiment bias among causal statements (we return to this point shortly). Figure 1 B also presents odds ratios for POS tags, to help us measure the differences in grammatical structure between causal and control documents (see also the Appendix for the effects of punctuation and casing on these odds ratios). The causal corpus showed greater odds for plural nouns (Penn Treebank tag: NNS), plural proper nouns (NNPS), Wh-determiners/pronouns (WDT, WP$) such as `whichever',`whatever', `whose', or `whosever', and predeterminers (PDT) such as `all' or `both'. Predeterminers quantify noun phrases such as `all' in `after all the events that caused you tears', showing that many causal statements, despite the potential brevity of social media, can encompass or delineate classes of agents and/or patients. On the other hand, the causal corpus has lower odds than the control corpus for list items (LS), proper singular nouns (NNP), and interjections (UH). Lastly, Fig. 1 B contains odds ratios for NE tags, allowing us to quantify the types of proper nouns that are more or less likely to appear in causal statements. Of the four tags, only the “Person” tag is less likely in the causal corpus than the control. (This matches the odds ratio for the proper singular noun discussed above.) Perhaps surprisingly, these results together imply that causal statements are less likely to involve individual persons than non-causal statements. There is considerable celebrity news and gossip on social media BIBREF4 ; discussions of celebrities may not be especially focused on attributing causes to these celebrities. All other NE tags, Organization, Location, and Miscellaneous, occur more frequently in the causal corpus than the control. All the odds ratios in Fig. 1 B were significant at the $\alpha = 0.05$ level except the List item marker (LS) POS tag. The unigram analysis in Fig. 1 does not incorporate higher-order phrase structure present in written language. To explore these structures specifically in the causal corpus, we constructed “cause-trees”, shown in Fig. 2 . Inspired by association mining BIBREF41 , a cause-tree is a binary tree rooted at either `caused', `causes', or `causing', that illustrates the most frequently occurring $n$ -grams that either begin or end with that root cause word (see Methods for details). The “causes” tree shows the focused writing (sentence segments) that many people use to express either the relationship between their own actions and a cause-and-effect (“even if it causes”), or the uncontrollable effect a cause may have on themselves: “causes me to have” shows a person's inability to control a causal event (“[...] i have central heterochromia which causes me to have dual colors in both eyes”). 
The `causing' tree reveals our ability to confine causal patterns to specific areas, and also our ability to be affected by others' causal decisions. Phrases like “causing a scene in/at” and “causing a ruckus in/at” (from documents like “causing a ruckus in the hotel lobby typical [...]”) show that people commonly associate bounds on where causal actions take place. The causing tree also shows people's tendency to emphasize current negativity: phrases like “pain this is causing”, coming from documents like “cant you see the pain you are causing her”, support the sentiment bias that causal attribution is more likely for negative cause-effect associations. Finally, the `caused' tree focuses heavily on negative events and indicates that people are more likely to remember negative causal events. Documents with phrases from the caused tree (“[...] appalling tragedy [...] that caused the death”, “[...] live with this pain that you caused when i was so young [...]”) exemplify that the negative events in focus are either large-scale tragedies or very personal negative events in one's life. Taken together, the popularity of negative sentiment unigrams (Fig. 1 ) and $n$ -grams (Fig. 2 ) among causal documents shows that emotional sentiment or “valence” may play a role in how people perform causal attribution BIBREF27 . The “if it bleeds, it leads” mentality among news media, where violent and negative news are more heavily reported, may appeal to this innate causal association mechanism. (On the other hand, many news media themselves use social media for reporting.) The prevalence of negative sentiment also contrasts with the “better angels of our nature” evidence of Pinker BIBREF42 , illustrating one bias that may help explain why many find the results of Ref. BIBREF42 surprising. Given this apparent sentiment skew, we further studied sentiment (Fig. 3 ). We compared the sentiment between the corpora in four different ways to investigate the observation (Figs. 1 B and 2 ) that people focus more on negative concepts when they discuss causality. First, we computed the mean sentiment score of each corpus using crowdsourced “labMT” scores weighted by unigram frequency (see Methods). We also applied tf-idf filtering (Methods) to exclude very common words, including the three cause-words, from the mean sentiment score. The causal corpus text was slightly negative on average while the control corpus was slightly positive (Fig. 3 A). The difference in mean sentiment score was significant (t-test: $p < 0.01$ ). Second, we moved from the mean score to the distribution of sentiment across all (scored) unigrams in the causal and control corpora (Fig. 3 B). The causal corpus contained a large group of negative sentiment unigrams, with labMT scores in the approximate range $-3 < s < -1/2$ ; the control corpus had significantly fewer unigrams in this score range. Third, in Fig. 3 C we used POS tags to categorize scored unigrams into nouns, verbs, and adjectives. Studying the distributions for each, we found that nouns explain much of the overall difference observed in Fig. 3 B, with verbs showing a similar but smaller difference between the two corpora. Adjectives showed little difference. The distributions in Fig. 3 C account for 87.8% of scored text in the causal corpus and 77.2% of the control corpus. The difference in sentiment between corpora was significant for all distributions (t-test: $p < 0.01$ ).
Fourth, to further confirm that the causal documents tend toward negative sentiment, we applied a separate, independent sentiment analysis using the Stanford NLP sentiment toolkit BIBREF38 to classify the sentiment of individual documents, not unigrams (see Methods). Instead of a numeric sentiment score, this classifier assigns documents to one of five categories ranging from very negative to very positive. The classifier showed that the causal corpus contains more negative and very negative documents than the control corpus, while the control corpus contains more neutral, positive, and very positive documents (Fig. 3 D). We have found language (Figs. 1 and 2 ) and sentiment (Fig. 3 ) differences between causal statements made on social media compared with other social media statements. But what is being discussed? What are the topical foci of causal statements? To study this, for our last analysis we applied topic models to the causal statements. Topic modeling finds groups of related terms (unigrams) by considering similarities between how those terms co-occur across a set of documents. We used the popular topic modeling method Latent Dirichlet Allocation (LDA) BIBREF39 . We ranked unigrams by how strongly associated they were with the topic. Inspecting these unigrams, we found that a 10-topic model discovered meaningful topics. See Methods for full details. The top unigrams for each topic are shown in Tab. 1 . Topics in the causal corpus tend to fall into three main categories: (i) news, covering current events, weather, etc.; (ii) medicine and health, covering cancer, obesity, stress, etc.; and (iii) relationships, covering problems, stress, crisis, drama, sorry, etc. While the topics are quite different, they are all similar in their use of negative sentiment words. The negative/global features in the `news' topic are captured in the most representative words: damage, fire, power, etc. Similar to news, the `accident' topic balances the more frequent day-to-day minor frustrations with the less frequent but more severe impacts of car accidents. The words `traffic' and `delays' are the most probable words for this topic, and are common, low-impact occurrences. In contrast, `crash', `car', `accident' and `death' are the next most probable words for the accident topic, and generally show a focus on less-common but higher-impact events. The `medical' topic also focused on negative words; highly probable words for this topic included `cancer', `break', `disease', `blood', etc. Meanwhile, the `body' topic contained words like `stress', `lose', and `weight', giving a focus on our more personal struggles with body image. Besides body image, the `injuries' topic uses specific pronouns (`his', `him', `her') in reference to a person's own injuries or the injuries of others such as athletes. Aside from more factual information, social information is well represented in causal statements. The `problems' topic shows that people attribute their problems to many others with terms like `dont', `people', `they', `them'. The `stress' topic also uses general words such as `more', `than', or `people' to link stress to all people, and in the same vein, the `crisis' topic focuses on problems within organizations such as governments. The `drama' and `sorry' topics tend towards more specific causal statements. The `drama' topic used the words `like', `she', and `her', while documents in the `sorry' topic tended to address other people.
The topics of causal documents discovered by LDA showed that both general and specific statements are made regarding news, medicine, and relationships when individuals make causal attributions online. Discussion The power of online communication is the speed and ease with which information can be propagated by potentially any connected users. Yet these strengths come at a cost: rumors and misinformation also spread easily. Causal misattribution is at the heart of many rumors, conspiracy theories, and misinformation campaigns. Given the central role of causal statements, further studies of the interplay of information propagation and online causal attributions are crucial. Are causal statements more likely to spread online and, if so, in which ways? What types of social media users are more or less likely to make causal statements? Will a user be more likely to make a causal statement if they have recently been exposed to one or more causal statements from other users? The topics of causal statements also bring forth important questions to be addressed: how timely are causal statements? Are certain topics always being discussed in causal statements? Are there causal topics that are very popular for only brief periods and then forgotten? Temporal dynamics of causal statements are also interesting: do time-of-day or time-of-year factors play a role in how causal statements are made? Our work here focused on a limited subset of causal statements, but more generally, these results may inform new methods for automatically detecting causal statements from unstructured, natural language text BIBREF17 . Better computational tools focused on causal statements are an important step towards further understanding misinformation campaigns and other online activities. Lastly, an important but deeply challenging open question is how, if it is even possible, to validate the accuracy of causal statements. Can causal statements be ranked by some confidence metric(s)? We hope to pursue these and other questions in future research. Parts-of-speech tagging depends on punctuation and casing, which we filtered in our data, so a study of how robust the POS algorithm is to punctuation and casing removal is important. We computed POS tags for the corpora with and without casing as well as with and without punctuation (which includes hashtags, links and at-symbols). Two tags mentioned in Fig. 1 B, NNPS and LS (which was not significant), were affected by punctuation removal. Otherwise, there is a strong correlation (Fig. 4 ) between Odds Ratios (causal vs. control) with punctuation and without punctuation, including casing and without casing ( $\rho = 0.71$ and $0.80$ , respectively), indicating the POS differences between the corpora were primarily not due to the removal of punctuation or casing. Acknowledgments We thank R. Gallagher for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. ISS-1447634.
Presence of only the exact unigrams 'caused', 'causing', or 'causes'
7348e781b2c3755b33df33f4f0cab4b94fcbeb9b
7348e781b2c3755b33df33f4f0cab4b94fcbeb9b_0
Q: Do they use expert annotations, crowdsourcing, or only automatic methods to analyze the corpora? Text: Introduction Social media and online social networks now provide vast amounts of data on human online discourse and other activities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . With so much communication taking place online and with social media being capable of hosting powerful misinformation campaigns BIBREF7 such as those claiming vaccines cause autism BIBREF8 , BIBREF9 , it is more important than ever to better understand the discourse of causality and the interplay between online communication and the statement of cause and effect. Causal inference is a crucial way that humans comprehend the world, and it has been a major focus of philosophy, statistics, mathematics, psychology, and the cognitive sciences. Philosophers such as Hume and Kant have long argued whether causality is a human-centric illusion or the discovery of a priori truth BIBREF10 , BIBREF11 . Causal inference in science is incredibly important, and researchers have developed statistical measures such as Granger causality BIBREF12 , mathematical and probabilistic frameworks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , and text mining procedures BIBREF17 , BIBREF18 , BIBREF19 to better infer causal influence from data. In the cognitive sciences, the famous perception experiments of Michotte et al. led to a long line of research exploring the cognitive biases that humans possess when attempting to link cause and effect BIBREF20 , BIBREF21 , BIBREF22 . How humans understand and communicate cause and effect relationships is complicated, and is influenced by language structure BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 and sentiment or valence BIBREF27 . A key finding is that the perceived emphasis or causal weight changes between the agent (the grammatical construct responsible for a cause) and the patient (the construct effected by the cause) depending on the types of verbs used to describe the cause and effect. Researchers have hypothesized BIBREF28 that this is because of the innate weighting property of the verbs in the English language that humans use to attribute causes and effects. Another finding is the role of a valence bias: the volume and intensity of causal reasoning may increase due to negative feedback or negative events BIBREF27 . Despite these long lines of research, causal attributions made via social media or online social networks have not been well studied. The goal of this paper is to explore the language and topics of causal statements in a large corpus of social media taken from Twitter. We hypothesize that language and sentiment biases play a significant role in these statements, and that tools from natural language processing and computational linguistics can be used to study them. We do not attempt to study the factual correctness of these statements or offer any degree of verification, nor do we exhaustively identify and extract all causal statements from these data. Instead, here we focus on statements that are with high certainty causal statements, with the goal to better understand key characteristics about causal statements that differ from everyday online communication. The rest of this paper is organized as follows: In Sec. 
"Materials and Methods" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. "Results" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. "Discussion" . Dataset, filtering, and corpus selection Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work. All document text was processed the same way. Punctuation, XML characters, and hyperlinks were removed, as were Twitter-specific “at-mentions” and “hashtags” (see also the Appendix). There is useful information here, but it is either not natural language text, or it is Twitter-specific, or both. Documents were broken into individual words (unigrams) on whitespace. Casing information was retained, as we will use it for our Named Entity analysis, but otherwise all words were considered lowercase only (see also the Appendix). Stemming BIBREF30 and lemmatization BIBREF31 were not performed. Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively. Tagging and corpus comparison Documents were further studied by annotating their unigrams with Parts-of-Speech (POS) and Named Entities (NE) tags. POS tagging was done using NLTK v3.1 BIBREF29 which implements an averaged perceptron classifier BIBREF32 trained on the Brown Corpus BIBREF33 . 
(POS tagging is affected by punctuation; we show in the Appendix that our results are relatively robust to the removal of punctuation.) POS tags denote the nouns, verbs, and other grammatical constructs present in a document. Named Entity Recognition (NER) was performed using the 4-class, distributional similarity tagger provided as part of the Stanford CoreNLP v3.6.0 toolkit BIBREF34 . NER aims to identify and classify proper words in a text. The NE classifications considered were: Organization, Location, Person, and Misc. The Stanford NER tagger uses a conditional random field model BIBREF35 trained on diverse sets of manually-tagged English-language data (CoNLL-2003) BIBREF34 . Conditional random fields allow dependencies between words so that `New York' and `New York Times', for example, are classified separately as a location and organization, respectively. These taggers are commonly used and often provide reasonably accurate results, but there is always potential ambiguity in written text and improving upon these methods remains an active area of research. Unigrams, POS, and NEs were compared between the cause and control corpora using odds ratios (ORs): $$\operatorname{OR}(x) = \frac{p_C(x)/ (1-p_C(x))}{p_N(x) / (1-p_N(x))},$$ (Eq. 1) where $p_C(x)$ and $p_N(x)$ are the probabilities that a unigram, POS, or NE $x$ occurs in the causal and control corpus, respectively. These probabilities were computed for each corpus separately as $p(x) = f(x) / \sum _{x^{\prime } \in V} f(x^{\prime })$ , where $f(x)$ is the total number of occurrences of $x$ in the corpus and $V$ is the relevant set of unigrams, POS, or NEs. Confidence intervals for the ORs were computed using Wald's methodology BIBREF36 . As there are many unique unigrams in the text, when computing unigram ORs we focused on the most meaningful unigrams within each corpus by using the following filtering criteria: we considered only the $\operatorname{OR}$ s of the 1500 most frequent unigrams in that corpus that also have a term-frequency-inverse-document-frequency (tf-idf) score above the 90th percentile for that corpus BIBREF37 . The tf-idf was computed as $$\mbox{tf-idf}(w) = \log f(w) \times \log \left( \frac{D}{\mathit {df}(w)} \right) ,$$ (Eq. 2) where $D$ is the total number of documents in the corpus, and $\mathit {df}(w)$ is the number of documents in the corpus containing unigram $w$ . Intuitively, unigrams with higher tf-idf scores appear frequently, but are not so frequent that they are ubiquitous through all documents. Filtering via tf-idf is standard practice in the information retrieval and data mining fields. Cause-trees For a better understanding of the higher-order language structure present in text phrases, cause-trees were constructed. A cause-tree starts with a root cause word (either `caused', `causing' or `causes'), then the two most probable words following (preceding) the root are identified. Next, the root word plus one of the top probable words is combined into a bigram and the top two most probable words following (preceding) this bigram are found. Repeatedly applying this process builds a binary tree representing the $n$ -grams that begin with (terminate at) the root word. This process can continue until a certain $n$ -gram length is reached or until there are no more documents long enough to search. Sentiment analysis Sentiment analysis was applied to estimate the emotional content of documents. 
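Before moving on, the odds-ratio comparison defined above (Eq. 1) can be sketched with a Wald-style interval as follows. The 2x2-count construction of the interval is a standard approximation and an assumption here, and all counts are invented.

# Odds ratio (Eq. 1) for one item x, with a Wald interval on the log odds ratio.
# f_* are corpus-wide occurrence counts; total_* are the corpus totals.
import math

def odds_ratio_wald(f_causal, total_causal, f_control, total_control, z=1.96):
    a, b = f_causal, total_causal - f_causal      # occurrences / non-occurrences, causal
    c, d = f_control, total_control - f_control   # occurrences / non-occurrences, control
    odds_ratio = (a / b) / (c / d)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Wald standard error
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, (lower, upper)

# Invented counts for a unigram such as 'stress':
print(odds_ratio_wald(f_causal=5200, total_causal=10_000_000,
                      f_control=1100, total_control=10_000_000))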
Two levels of analysis were used: a method where individual unigrams were given crowdsourced numeric sentiment scores, and a second method involving a trained classifier that can incorporate document-level phrase information. For the first sentiment analysis, each unigram $w$ was assigned a crowdsourced “labMT” sentiment score $s(w)$ BIBREF5 . (Unlike BIBREF5 , scores were recentered by subtracting the mean, $s(w) \leftarrow s(w)-\left<s\right>$ .) Unigrams determined by volunteer raters to have a negative emotional sentiment (`hate',`death', etc.) have $s(w) < 0$ , while unigrams determined to have a positive emotional sentiment (`love', `happy', etc.) tend to have $s(w) > 0$ . Unigrams that have labMT scores and are above the 90th percentile of tf-idf for the corpus form the set $\tilde{V}$ . (Unigrams in $\tilde{V}$ need not be among the 1500 most frequent unigrams.) The set $\tilde{V}$ captures 87.9% (91.5%) of total unigrams in the causal (control) corpus. Crucially, the tf-idf filtering ensures that the words `caused', `causes', and `causing', which have a slight negative sentiment, are not included and do not introduce a systematic bias when comparing the two corpora. This sentiment measure works on a per-unigram basis, and is therefore best suited for large bodies of text, not short documents BIBREF5 . Instead of considering individual documents, the distributions of labMT scores over all unigrams for each corpus was used to compare the corpora. In addition, a single sentiment score for each corpus was computed as the average sentiment score over all unigrams in that corpus, weighed by unigram frequency: $\sum _{w \in \tilde{V}} {f(w) s(w)} \Big / \sum _{w^{\prime } \in \tilde{V}} f(w^{\prime })$ . To supplement this sentiment analysis method, we applied a second method capable of estimating with reasonable accuracy the sentiment of individual documents. We used the sentiment classifier BIBREF38 included in the Stanford CoreNLP v3.6.0 toolkit to documents in each corpus. Documents were individually classified into one of five categories: very negative, negative, neutral, positive, very positive. The data used to train this classifier is taken from positive and negative reviews of movies (Stanford Sentiment Treebank v1.0) BIBREF38 . Topic modeling Lastly, we applied topic modeling to the causal corpus to determine what are the topical foci most discussed in causal statements. Topics were built from the causal corpus using Latent Dirichlet Allocation (LDA) BIBREF39 . Under LDA each document is modeled as a bag-of-words or unordered collection of unigrams. Topics are considered as mixtures of unigrams by estimating conditional distributions over unigrams: $P(w|T)$ , the probability of unigram $w$ given topic $T$ and documents are considered as mixtures of topics via $P(T|d)$ , the probability of topic $T$ given document $d$ . These distributions are then found via statistical inference given the observed distributions of unigrams across documents. The total number of topics is a parameter chosen by the practitioner. For this study we used the MALLET v2.0.8RC3 topic modeling toolkit BIBREF40 for model inference. By inspecting the most probable unigrams per topic (according to $P(w|T)$ ), we found 10 topics provided meaningful and distinct topics. Results We have collected approximately 1M causal statements made on Twitter over the course of 2013, and for a control we gathered the same number of statements selected at random but controlling for time of year (see Methods). 
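The frequency-weighted corpus sentiment score described above reduces to a short computation; in the sketch below the labMT scores and unigram counts are placeholders, and the tf-idf filtering of the scored vocabulary is noted but not reproduced.

# Corpus-level labMT sentiment: a frequency-weighted average of recentered
# per-unigram scores. (The real pipeline restricts to a tf-idf-filtered
# vocabulary; that filter is omitted here.)
from collections import Counter

labmt = {"love": 2.4, "happy": 2.3, "hate": -2.6, "death": -3.3, "stress": -1.4}
mean_score = sum(labmt.values()) / len(labmt)
labmt = {w: s - mean_score for w, s in labmt.items()}   # recenter: s(w) <- s(w) - <s>

def corpus_sentiment(unigram_counts, scores):
    """Average sentiment over scored unigrams, weighted by unigram frequency."""
    numerator = sum(f * scores[w] for w, f in unigram_counts.items() if w in scores)
    denominator = sum(f for w, f in unigram_counts.items() if w in scores)
    return numerator / denominator

causal_counts = Counter({"stress": 40, "death": 12, "happy": 5})
control_counts = Counter({"happy": 30, "love": 25, "hate": 4})
print(corpus_sentiment(causal_counts, labmt), corpus_sentiment(control_counts, labmt))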
We applied Parts-of-Speech (POS) and Named Entity (NE) taggers to all these texts. Some post-processed and tagged example documents, both causal and control, are shown in Fig. 1 A. We also applied sentiment analysis methods to these documents (Methods) and we have highlighted very positive and very negative words throughout Fig. 1 . In Fig. 1 B we present odds ratios for how frequently unigrams (words), POS, or NE appear in causal documents relative to control documents. The three unigrams most strongly skewed towards causal documents were `stress', `problems', and `trouble', while the three most skewed towards control documents were `photo', `ready', and `cute'. While these are only a small number of the unigrams present, this does imply a negative sentiment bias among causal statements (we return to this point shortly). Figure 1 B also presents odds ratios for POS tags, to help us measure the differences in grammatical structure between causal and control documents (see also the Appendix for the effects of punctuation and casing on these odds ratios). The causal corpus showed greater odds for plural nouns (Penn Treebank tag: NNS), plural proper nouns (NNPS), Wh-determiners/pronouns (WDT, WP$) such as `whichever',`whatever', `whose', or `whosever', and predeterminers (PDT) such as `all' or `both'. Predeterminers quantify noun phrases such as `all' in `after all the events that caused you tears', showing that many causal statements, despite the potential brevity of social media, can encompass or delineate classes of agents and/or patients. On the other hand, the causal corpus has lower odds than the control corpus for list items (LS), proper singular nouns (NNP), and interjections (UH). Lastly, Fig. 1 B contains odds ratios for NE tags, allowing us to quantify the types of proper nouns that are more or less likely to appear in causal statements. Of the four tags, only the “Person” tag is less likely in the causal corpus than the control. (This matches the odds ratio for the proper singular noun discussed above.) Perhaps surprisingly, these results together imply that causal statements are less likely to involve individual persons than non-causal statements. There is considerable celebrity news and gossip on social media BIBREF4 ; discussions of celebrities may not be especially focused on attributing causes to these celebrities. All other NE tags, Organization, Location, and Miscellaneous, occur more frequently in the causal corpus than the control. All the odds ratios in Fig. 1 B were significant at the $\alpha = 0.05$ level except the List item marker (LS) POS tag. The unigram analysis in Fig. 1 does not incorporate higher-order phrase structure present in written language. To explore these structures specifically in the causal corpus, we constructed “cause-trees”, shown in Fig. 2 . Inspired by association mining BIBREF41 , a cause-tree is a binary tree rooted at either `caused', `causes', or `causing', that illustrates the most frequently occurring $n$ -grams that either begin or end with that root cause word (see Methods for details). The “causes” tree shows the focused writing (sentence segments) that many people use to express either the relationship between their own actions and a cause-and-effect (“even if it causes”), or the uncontrollable effect a cause may have on themselves: “causes me to have” shows a person's inability to control a causal event (“[...] i have central heterochromia which causes me to have dual colors in both eyes”). 
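A sketch of the cause-tree construction behind Fig. 2 is given below: starting from a root cause word, the current $n$-gram is repeatedly extended by its two most frequent continuations. Only the forward (following-word) direction is shown, and the documents are toy examples.

# Greedy binary "cause-tree": at each step keep the two most frequent words
# that follow the current n-gram anywhere in the corpus.
from collections import Counter

docs = [
    "this heat causes me to have headaches".split(),
    "stress causes me to worry".split(),
    "it causes problems for everyone".split(),
    "smoke causes problems with breathing".split(),
]

def next_word_counts(docs, ngram):
    """Count which words follow a given n-gram across all documents."""
    n = len(ngram)
    counts = Counter()
    for doc in docs:
        for i in range(len(doc) - n):
            if tuple(doc[i:i + n]) == ngram:
                counts[doc[i + n]] += 1
    return counts

def cause_tree(docs, ngram, depth):
    """Nested dict of the two most frequent continuations, up to `depth` more words."""
    if depth == 0:
        return {}
    top_two = [w for w, _ in next_word_counts(docs, ngram).most_common(2)]
    return {w: cause_tree(docs, ngram + (w,), depth - 1) for w in top_two}

print(cause_tree(docs, ("causes",), depth=3))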
The `causing' tree reveals our ability to confine causal patterns to specific areas, and also our ability to be affected by others causal decisions. Phrases like “causing a scene in/at” and “causing a ruckus in/at” (from documents like “causing a ruckus in the hotel lobby typical [...]”) show people commonly associate bounds on where causal actions take place. The causing tree also shows people's tendency to emphasize current negativity: Phrases like “pain this is causing” coming from documents like “cant you see the pain you are causing her” supports the sentiment bias that causal attribution is more likely for negative cause-effect associations. Finally, the `caused' tree focuses heavily on negative events and indicates people are more likely to remember negative causal events. Documents with phrases from the caused tree (“[...] appalling tragedy [...] that caused the death”, “[...] live with this pain that you caused when i was so young [...]”) exemplify the negative events that are focused on are large-scale tragedies or very personal negative events in one's life. Taken together, the popularity of negative sentiment unigrams (Fig. 1 ) and $n$ -grams (Fig. 2 ) among causal documents shows that emotional sentiment or “valence” may play a role in how people perform causal attribution BIBREF27 . The “if it bleeds, it leads” mentality among news media, where violent and negative news are more heavily reported, may appeal to this innate causal association mechanism. (On the other hand, many news media themselves use social media for reporting.) The prevalence of negative sentiment also contrasts with the “better angels of our nature” evidence of Pinker BIBREF42 , illustrating one bias that shows why many find the results of Ref. BIBREF42 surprising. Given this apparent sentiment skew, we further studied sentiment (Fig. 3 ). We compared the sentiment between the corpora in four different ways to investigate the observation (Figs. 1 B and 2 ) that people focus more about negative concepts when they discuss causality. First, we computed the mean sentiment score of each corpus using crowdsourced “labMT” scores weighted by unigram frequency (see Methods). We also applied tf-idf filtering (Methods) to exclude very common words, including the three cause-words, from the mean sentiment score. The causal corpus text was slightly negative on average while the control corpus was slightly positive (Fig. 3 A). The difference in mean sentiment score was significant (t-test: $p < 0.01$ ). Second, we moved from the mean score to the distribution of sentiment across all (scored) unigrams in the causal and control corpora (Fig. 3 B). The causal corpus contained a large group of negative sentiment unigrams, with labMT scores in the approximate range $-3 < s < -1/2$ ; the control corpus had significantly fewer unigrams in this score range. Third, in Fig. 3 C we used POS tags to categorize scored unigrams into nouns, verbs, and adjectives. Studying the distributions for each, we found that nouns explain much of the overall difference observed in Fig. 3 B, with verbs showing a similar but smaller difference between the two corpora. Adjectives showed little difference. The distributions in Fig. 3 C account for 87.8% of scored text in the causal corpus and 77.2% of the control corpus. The difference in sentiment between corpora was significant for all distributions (t-test: $p < 0.01$ ). 
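One way to carry out a significance test of the kind reported above is an independent two-sample t-test over per-occurrence unigram scores, as sketched below; exactly how the published test was weighted is not restated here, so this construction is an assumption, and all scores and counts are toy values.

# Compare mean labMT sentiment between corpora by expanding each corpus into a
# list of per-occurrence scores and applying an independent t-test.
from scipy.stats import ttest_ind

labmt = {"stress": -1.4, "death": -3.3, "happy": 2.3, "love": 2.4}
causal_counts = {"stress": 40, "death": 12, "happy": 5}
control_counts = {"happy": 30, "love": 25, "stress": 4}

def score_samples(counts, scores):
    return [scores[w] for w, f in counts.items() if w in scores for _ in range(f)]

t_stat, p_value = ttest_ind(score_samples(causal_counts, labmt),
                            score_samples(control_counts, labmt))
print(t_stat, p_value)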
Fourth, to further confirm that the causal documents tend toward negative sentiment, we applied a separate, independent sentiment analysis using the Stanford NLP sentiment toolkit BIBREF38 to classify the sentiment of individual documents not unigrams (see Methods). Instead of a numeric sentiment score, this classifier assigns documents to one of five categories ranging from very negative to very positive. The classifier showed that the causal corpus contains more negative and very negative documents than the control corpus, while the control corpus contains more neutral, positive, and very positive documents (Fig. 3 D). We have found language (Figs. 1 and 2 ) and sentiment (Fig. 3 ) differences between causal statements made on social media compared with other social media statements. But what is being discussed? What are the topical foci of causal statements? To study this, for our last analysis we applied topic models to the causal statements. Topic modeling finds groups of related terms (unigrams) by considering similarities between how those terms co-occur across a set of documents. We used the popular topic modeling method Latent Dirichlet Allocation (LDA) BIBREF39 . We ranked unigrams by how strongly associated they were with the topic. Inspecting these unigrams we found that a 10-topic model discovered meaningful topics. See Methods for full details. The top unigrams for each topic are shown in Tab. 1 . Topics in the causal corpus tend to fall into three main categories: (i) news, covering current events, weather, etc.; (ii) medicine and health, covering cancer, obesity, stress, etc.; and (iii) relationships, covering problems, stress, crisis, drama, sorry, etc. While the topics are quite different, they are all similar in their use of negative sentiment words. The negative/global features in the `news' topic are captured in the most representative words: damage, fire, power, etc. Similar to news, the `accident' topic balances the more frequent day-to-day minor frustrations with the less frequent but more severe impacts of car accidents. The words `traffic' and `delays' are the most probable words for this topic, and are common, low-impact occurrences. On the contrary, `crash', `car', `accident' and `death' are the next most probable words for the accident topic, and generally show a focus on less-common but higher-impact events. The `medical' topic also focused on negative words; highly probable words for this topic included `cancer', `break', `disease', `blood', etc. Meanwhile, the `body' topic contained words like: `stress', `lose', and `weight', giving a focus on on our more personal struggles with body image. Besides body image, the `injuries' topic uses specific pronouns (`his', `him', `her') in references to a person's own injuries or the injuries of others such as athletes. Aside from more factual information, social information is well represented in causal statements. The `problems' topic shows people attribute their problems to many others with terms like: `dont', `people', `they', `them'. The `stress' topic also uses general words such as `more', `than', or `people' to link stress to all people, and in the same vein, the `crisis' topic focuses on problems within organizations such as governments. The `drama' and `sorry' topics tend towards more specific causal statements. Drama used the words: `like', `she', and `her' while documents in the sorry topic tended to address other people. 
The topics of causal documents discovered by LDA showed that both general and specific statements are made regarding news, medicine, and relationships when individuals make causal attributions online. Discussion The power of online communication is the speed and ease with which information can be propagated by potentially any connected users. Yet these strengths come at a cost: rumors and misinformation also spread easily. Causal misattribution is at the heart of many rumors, conspiracy theories, and misinformation campaigns. Given the central role of causal statements, further studies of the interplay of information propagation and online causal attributions are crucial. Are causal statements more likely to spread online and, if so, in which ways? What types of social media users are more or less likely to make causal statements? Will a user be more likely to make a causal statement if they have recently been exposed to one or more causal statements from other users? The topics of causal statements also bring forth important questions to be addressed: how timely are causal statements? Are certain topics always being discussed in causal statements? Are there causal topics that are very popular for only brief periods and then forgotten? Temporal dynamics of causal statements are also interesting: do time-of-day or time-of-year factors play a role in how causal statements are made? Our work here focused on a limited subset of causal statements, but more generally, these results may inform new methods for automatically detecting causal statements from unstructured, natural language text BIBREF17 . Better computational tools focused on causal statements are an important step towards further understanding misinformation campaigns and other online activities. Lastly, an important but deeply challenging open question is how, if it is even possible, to validate the accuracy of causal statements. Can causal statements be ranked by some confidence metric(s)? We hope to pursue these and other questions in future research. Parts-of-speech tagging depends on punctuation and casing, which we filtered in our data, so a study of how robust the POS algorithm is to punctuation and casing removal is important. We computed POS tags for the corpora with and without casing as well as with and without punctuation (which includes hashtags, links and at-symbols). Two tags mentioned in Fig. 1 B, NNPS and LS (which was not significant), were affected by punctuation removal. Otherwise, there is a strong correlation (Fig. 4 ) between Odds Ratios (causal vs. control) with punctuation and without punctuation, including casing and without casing ( $\rho = 0.71$ and $0.80$ , respectively), indicating the POS differences between the corpora were primarily not due to the removal of punctuation or casing. Acknowledgments We thank R. Gallagher for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. ISS-1447634.
Only automatic methods
f68bd65b5251f86e1ed89f0c858a8bb2a02b233a
f68bd65b5251f86e1ed89f0c858a8bb2a02b233a_0
Q: how do they collect the comparable corpus? Text: Introduction Social media and online social networks now provide vast amounts of data on human online discourse and other activities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . With so much communication taking place online and with social media being capable of hosting powerful misinformation campaigns BIBREF7 such as those claiming vaccines cause autism BIBREF8 , BIBREF9 , it is more important than ever to better understand the discourse of causality and the interplay between online communication and the statement of cause and effect. Causal inference is a crucial way that humans comprehend the world, and it has been a major focus of philosophy, statistics, mathematics, psychology, and the cognitive sciences. Philosophers such as Hume and Kant have long argued whether causality is a human-centric illusion or the discovery of a priori truth BIBREF10 , BIBREF11 . Causal inference in science is incredibly important, and researchers have developed statistical measures such as Granger causality BIBREF12 , mathematical and probabilistic frameworks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , and text mining procedures BIBREF17 , BIBREF18 , BIBREF19 to better infer causal influence from data. In the cognitive sciences, the famous perception experiments of Michotte et al. led to a long line of research exploring the cognitive biases that humans possess when attempting to link cause and effect BIBREF20 , BIBREF21 , BIBREF22 . How humans understand and communicate cause and effect relationships is complicated, and is influenced by language structure BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 and sentiment or valence BIBREF27 . A key finding is that the perceived emphasis or causal weight changes between the agent (the grammatical construct responsible for a cause) and the patient (the construct effected by the cause) depending on the types of verbs used to describe the cause and effect. Researchers have hypothesized BIBREF28 that this is because of the innate weighting property of the verbs in the English language that humans use to attribute causes and effects. Another finding is the role of a valence bias: the volume and intensity of causal reasoning may increase due to negative feedback or negative events BIBREF27 . Despite these long lines of research, causal attributions made via social media or online social networks have not been well studied. The goal of this paper is to explore the language and topics of causal statements in a large corpus of social media taken from Twitter. We hypothesize that language and sentiment biases play a significant role in these statements, and that tools from natural language processing and computational linguistics can be used to study them. We do not attempt to study the factual correctness of these statements or offer any degree of verification, nor do we exhaustively identify and extract all causal statements from these data. Instead, here we focus on statements that are with high certainty causal statements, with the goal to better understand key characteristics about causal statements that differ from everyday online communication. The rest of this paper is organized as follows: In Sec. "Materials and Methods" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. 
"Results" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. "Discussion" . Dataset, filtering, and corpus selection Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work. All document text was processed the same way. Punctuation, XML characters, and hyperlinks were removed, as were Twitter-specific “at-mentions” and “hashtags” (see also the Appendix). There is useful information here, but it is either not natural language text, or it is Twitter-specific, or both. Documents were broken into individual words (unigrams) on whitespace. Casing information was retained, as we will use it for our Named Entity analysis, but otherwise all words were considered lowercase only (see also the Appendix). Stemming BIBREF30 and lemmatization BIBREF31 were not performed. Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively. Tagging and corpus comparison Documents were further studied by annotating their unigrams with Parts-of-Speech (POS) and Named Entities (NE) tags. POS tagging was done using NLTK v3.1 BIBREF29 which implements an averaged perceptron classifier BIBREF32 trained on the Brown Corpus BIBREF33 . (POS tagging is affected by punctuation; we show in the Appendix that our results are relatively robust to the removal of punctuation.) POS tags denote the nouns, verbs, and other grammatical constructs present in a document. Named Entity Recognition (NER) was performed using the 4-class, distributional similarity tagger provided as part of the Stanford CoreNLP v3.6.0 toolkit BIBREF34 . 
NER aims to identify and classify proper words in a text. The NE classifications considered were: Organization, Location, Person, and Misc. The Stanford NER tagger uses a conditional random field model BIBREF35 trained on diverse sets of manually-tagged English-language data (CoNLL-2003) BIBREF34 . Conditional random fields allow dependencies between words so that `New York' and `New York Times', for example, are classified separately as a location and organization, respectively. These taggers are commonly used and often provide reasonably accurate results, but there is always potential ambiguity in written text and improving upon these methods remains an active area of research. Unigrams, POS, and NEs were compared between the cause and control corpora using odds ratios (ORs): $$\operatorname{OR}(x) = \frac{p_C(x)/ (1-p_C(x))}{p_N(x) / (1-p_N(x))},$$ (Eq. 1) where $p_C(x)$ and $p_N(x)$ are the probabilities that a unigram, POS, or NE $x$ occurs in the causal and control corpus, respectively. These probabilities were computed for each corpus separately as $p(x) = f(x) / \sum _{x^{\prime } \in V} f(x^{\prime })$ , where $f(x)$ is the total number of occurrences of $x$ in the corpus and $V$ is the relevant set of unigrams, POS, or NEs. Confidence intervals for the ORs were computed using Wald's methodology BIBREF36 . As there are many unique unigrams in the text, when computing unigram ORs we focused on the most meaningful unigrams within each corpus by using the following filtering criteria: we considered only the $\operatorname{OR}$ s of the 1500 most frequent unigrams in that corpus that also have a term-frequency-inverse-document-frequency (tf-idf) score above the 90th percentile for that corpus BIBREF37 . The tf-idf was computed as $$\mbox{tf-idf}(w) = \log f(w) \times \log \left( \frac{D}{\mathit {df}(w)} \right) ,$$ (Eq. 2) where $D$ is the total number of documents in the corpus, and $\mathit {df}(w)$ is the number of documents in the corpus containing unigram $w$ . Intuitively, unigrams with higher tf-idf scores appear frequently, but are not so frequent that they are ubiquitous through all documents. Filtering via tf-idf is standard practice in the information retrieval and data mining fields. Cause-trees For a better understanding of the higher-order language structure present in text phrases, cause-trees were constructed. A cause-tree starts with a root cause word (either `caused', `causing' or `causes'), then the two most probable words following (preceding) the root are identified. Next, the root word plus one of the top probable words is combined into a bigram and the top two most probable words following (preceding) this bigram are found. Repeatedly applying this process builds a binary tree representing the $n$ -grams that begin with (terminate at) the root word. This process can continue until a certain $n$ -gram length is reached or until there are no more documents long enough to search. Sentiment analysis Sentiment analysis was applied to estimate the emotional content of documents. Two levels of analysis were used: a method where individual unigrams were given crowdsourced numeric sentiment scores, and a second method involving a trained classifier that can incorporate document-level phrase information. For the first sentiment analysis, each unigram $w$ was assigned a crowdsourced “labMT” sentiment score $s(w)$ BIBREF5 . (Unlike BIBREF5 , scores were recentered by subtracting the mean, $s(w) \leftarrow s(w)-\left<s\right>$ .) 
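Stepping back to the unigram filter above, the tf-idf criterion (Eq. 2) can be sketched as follows; the documents are toy examples, the percentile helper is a simple approximation, and a `>=` comparison is used so that the tiny example keeps something.

# Keep only frequent unigrams whose tf-idf (Eq. 2) clears the 90th percentile.
import math
from collections import Counter

docs = [["stress", "causes", "problems"],
        ["storm", "caused", "damage"],
        ["stress", "caused", "delays"]]

D = len(docs)
freq = Counter(w for doc in docs for w in doc)            # f(w)
doc_freq = Counter(w for doc in docs for w in set(doc))   # df(w)
tfidf = {w: math.log(freq[w]) * math.log(D / doc_freq[w]) for w in freq}

def percentile(values, q):
    ordered = sorted(values)
    return ordered[int(q * (len(ordered) - 1))]

threshold = percentile(list(tfidf.values()), 0.90)
top_frequent = [w for w, _ in freq.most_common(1500)]
kept = [w for w in top_frequent if tfidf[w] >= threshold]
print(kept)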
Unigrams determined by volunteer raters to have a negative emotional sentiment (`hate',`death', etc.) have $s(w) < 0$ , while unigrams determined to have a positive emotional sentiment (`love', `happy', etc.) tend to have $s(w) > 0$ . Unigrams that have labMT scores and are above the 90th percentile of tf-idf for the corpus form the set $\tilde{V}$ . (Unigrams in $\tilde{V}$ need not be among the 1500 most frequent unigrams.) The set $\tilde{V}$ captures 87.9% (91.5%) of total unigrams in the causal (control) corpus. Crucially, the tf-idf filtering ensures that the words `caused', `causes', and `causing', which have a slight negative sentiment, are not included and do not introduce a systematic bias when comparing the two corpora. This sentiment measure works on a per-unigram basis, and is therefore best suited for large bodies of text, not short documents BIBREF5 . Instead of considering individual documents, the distributions of labMT scores over all unigrams for each corpus was used to compare the corpora. In addition, a single sentiment score for each corpus was computed as the average sentiment score over all unigrams in that corpus, weighed by unigram frequency: $\sum _{w \in \tilde{V}} {f(w) s(w)} \Big / \sum _{w^{\prime } \in \tilde{V}} f(w^{\prime })$ . To supplement this sentiment analysis method, we applied a second method capable of estimating with reasonable accuracy the sentiment of individual documents. We used the sentiment classifier BIBREF38 included in the Stanford CoreNLP v3.6.0 toolkit to documents in each corpus. Documents were individually classified into one of five categories: very negative, negative, neutral, positive, very positive. The data used to train this classifier is taken from positive and negative reviews of movies (Stanford Sentiment Treebank v1.0) BIBREF38 . Topic modeling Lastly, we applied topic modeling to the causal corpus to determine what are the topical foci most discussed in causal statements. Topics were built from the causal corpus using Latent Dirichlet Allocation (LDA) BIBREF39 . Under LDA each document is modeled as a bag-of-words or unordered collection of unigrams. Topics are considered as mixtures of unigrams by estimating conditional distributions over unigrams: $P(w|T)$ , the probability of unigram $w$ given topic $T$ and documents are considered as mixtures of topics via $P(T|d)$ , the probability of topic $T$ given document $d$ . These distributions are then found via statistical inference given the observed distributions of unigrams across documents. The total number of topics is a parameter chosen by the practitioner. For this study we used the MALLET v2.0.8RC3 topic modeling toolkit BIBREF40 for model inference. By inspecting the most probable unigrams per topic (according to $P(w|T)$ ), we found 10 topics provided meaningful and distinct topics. Results We have collected approximately 1M causal statements made on Twitter over the course of 2013, and for a control we gathered the same number of statements selected at random but controlling for time of year (see Methods). We applied Parts-of-Speech (POS) and Named Entity (NE) taggers to all these texts. Some post-processed and tagged example documents, both causal and control, are shown in Fig. 1 A. We also applied sentiment analysis methods to these documents (Methods) and we have highlighted very positive and very negative words throughout Fig. 1 . In Fig. 
1 B we present odds ratios for how frequently unigrams (words), POS, or NE appear in causal documents relative to control documents. The three unigrams most strongly skewed towards causal documents were `stress', `problems', and `trouble', while the three most skewed towards control documents were `photo', `ready', and `cute'. While these are only a small number of the unigrams present, this does imply a negative sentiment bias among causal statements (we return to this point shortly). Figure 1 B also presents odds ratios for POS tags, to help us measure the differences in grammatical structure between causal and control documents (see also the Appendix for the effects of punctuation and casing on these odds ratios). The causal corpus showed greater odds for plural nouns (Penn Treebank tag: NNS), plural proper nouns (NNPS), Wh-determiners/pronouns (WDT, WP$) such as `whichever',`whatever', `whose', or `whosever', and predeterminers (PDT) such as `all' or `both'. Predeterminers quantify noun phrases such as `all' in `after all the events that caused you tears', showing that many causal statements, despite the potential brevity of social media, can encompass or delineate classes of agents and/or patients. On the other hand, the causal corpus has lower odds than the control corpus for list items (LS), proper singular nouns (NNP), and interjections (UH). Lastly, Fig. 1 B contains odds ratios for NE tags, allowing us to quantify the types of proper nouns that are more or less likely to appear in causal statements. Of the four tags, only the “Person” tag is less likely in the causal corpus than the control. (This matches the odds ratio for the proper singular noun discussed above.) Perhaps surprisingly, these results together imply that causal statements are less likely to involve individual persons than non-causal statements. There is considerable celebrity news and gossip on social media BIBREF4 ; discussions of celebrities may not be especially focused on attributing causes to these celebrities. All other NE tags, Organization, Location, and Miscellaneous, occur more frequently in the causal corpus than the control. All the odds ratios in Fig. 1 B were significant at the $\alpha = 0.05$ level except the List item marker (LS) POS tag. The unigram analysis in Fig. 1 does not incorporate higher-order phrase structure present in written language. To explore these structures specifically in the causal corpus, we constructed “cause-trees”, shown in Fig. 2 . Inspired by association mining BIBREF41 , a cause-tree is a binary tree rooted at either `caused', `causes', or `causing', that illustrates the most frequently occurring $n$ -grams that either begin or end with that root cause word (see Methods for details). The “causes” tree shows the focused writing (sentence segments) that many people use to express either the relationship between their own actions and a cause-and-effect (“even if it causes”), or the uncontrollable effect a cause may have on themselves: “causes me to have” shows a person's inability to control a causal event (“[...] i have central heterochromia which causes me to have dual colors in both eyes”). The `causing' tree reveals our ability to confine causal patterns to specific areas, and also our ability to be affected by others causal decisions. Phrases like “causing a scene in/at” and “causing a ruckus in/at” (from documents like “causing a ruckus in the hotel lobby typical [...]”) show people commonly associate bounds on where causal actions take place. 
The causing tree also shows people's tendency to emphasize current negativity: Phrases like “pain this is causing” coming from documents like “cant you see the pain you are causing her” supports the sentiment bias that causal attribution is more likely for negative cause-effect associations. Finally, the `caused' tree focuses heavily on negative events and indicates people are more likely to remember negative causal events. Documents with phrases from the caused tree (“[...] appalling tragedy [...] that caused the death”, “[...] live with this pain that you caused when i was so young [...]”) exemplify the negative events that are focused on are large-scale tragedies or very personal negative events in one's life. Taken together, the popularity of negative sentiment unigrams (Fig. 1 ) and $n$ -grams (Fig. 2 ) among causal documents shows that emotional sentiment or “valence” may play a role in how people perform causal attribution BIBREF27 . The “if it bleeds, it leads” mentality among news media, where violent and negative news are more heavily reported, may appeal to this innate causal association mechanism. (On the other hand, many news media themselves use social media for reporting.) The prevalence of negative sentiment also contrasts with the “better angels of our nature” evidence of Pinker BIBREF42 , illustrating one bias that shows why many find the results of Ref. BIBREF42 surprising. Given this apparent sentiment skew, we further studied sentiment (Fig. 3 ). We compared the sentiment between the corpora in four different ways to investigate the observation (Figs. 1 B and 2 ) that people focus more about negative concepts when they discuss causality. First, we computed the mean sentiment score of each corpus using crowdsourced “labMT” scores weighted by unigram frequency (see Methods). We also applied tf-idf filtering (Methods) to exclude very common words, including the three cause-words, from the mean sentiment score. The causal corpus text was slightly negative on average while the control corpus was slightly positive (Fig. 3 A). The difference in mean sentiment score was significant (t-test: $p < 0.01$ ). Second, we moved from the mean score to the distribution of sentiment across all (scored) unigrams in the causal and control corpora (Fig. 3 B). The causal corpus contained a large group of negative sentiment unigrams, with labMT scores in the approximate range $-3 < s < -1/2$ ; the control corpus had significantly fewer unigrams in this score range. Third, in Fig. 3 C we used POS tags to categorize scored unigrams into nouns, verbs, and adjectives. Studying the distributions for each, we found that nouns explain much of the overall difference observed in Fig. 3 B, with verbs showing a similar but smaller difference between the two corpora. Adjectives showed little difference. The distributions in Fig. 3 C account for 87.8% of scored text in the causal corpus and 77.2% of the control corpus. The difference in sentiment between corpora was significant for all distributions (t-test: $p < 0.01$ ). Fourth, to further confirm that the causal documents tend toward negative sentiment, we applied a separate, independent sentiment analysis using the Stanford NLP sentiment toolkit BIBREF38 to classify the sentiment of individual documents not unigrams (see Methods). Instead of a numeric sentiment score, this classifier assigns documents to one of five categories ranging from very negative to very positive. 
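Assuming each document has already been assigned one of the classifier's five categories, the per-corpus comparison behind Fig. 3 D reduces to tallying category fractions, as in this sketch with invented labels.

# Fraction of documents in each sentiment category, per corpus.
from collections import Counter

CATEGORIES = ["very negative", "negative", "neutral", "positive", "very positive"]
causal_labels = ["negative", "very negative", "negative", "neutral"]      # invented
control_labels = ["positive", "neutral", "very positive", "positive"]     # invented

def category_fractions(labels):
    counts = Counter(labels)
    return {c: counts[c] / len(labels) for c in CATEGORIES}

print(category_fractions(causal_labels))
print(category_fractions(control_labels))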
The classifier showed that the causal corpus contains more negative and very negative documents than the control corpus, while the control corpus contains more neutral, positive, and very positive documents (Fig. 3 D). We have found language (Figs. 1 and 2 ) and sentiment (Fig. 3 ) differences between causal statements made on social media compared with other social media statements. But what is being discussed? What are the topical foci of causal statements? To study this, for our last analysis we applied topic models to the causal statements. Topic modeling finds groups of related terms (unigrams) by considering similarities between how those terms co-occur across a set of documents. We used the popular topic modeling method Latent Dirichlet Allocation (LDA) BIBREF39 . We ranked unigrams by how strongly associated they were with the topic. Inspecting these unigrams we found that a 10-topic model discovered meaningful topics. See Methods for full details. The top unigrams for each topic are shown in Tab. 1 . Topics in the causal corpus tend to fall into three main categories: (i) news, covering current events, weather, etc.; (ii) medicine and health, covering cancer, obesity, stress, etc.; and (iii) relationships, covering problems, stress, crisis, drama, sorry, etc. While the topics are quite different, they are all similar in their use of negative sentiment words. The negative/global features in the `news' topic are captured in the most representative words: damage, fire, power, etc. Similar to news, the `accident' topic balances the more frequent day-to-day minor frustrations with the less frequent but more severe impacts of car accidents. The words `traffic' and `delays' are the most probable words for this topic, and are common, low-impact occurrences. On the contrary, `crash', `car', `accident' and `death' are the next most probable words for the accident topic, and generally show a focus on less-common but higher-impact events. The `medical' topic also focused on negative words; highly probable words for this topic included `cancer', `break', `disease', `blood', etc. Meanwhile, the `body' topic contained words like: `stress', `lose', and `weight', giving a focus on on our more personal struggles with body image. Besides body image, the `injuries' topic uses specific pronouns (`his', `him', `her') in references to a person's own injuries or the injuries of others such as athletes. Aside from more factual information, social information is well represented in causal statements. The `problems' topic shows people attribute their problems to many others with terms like: `dont', `people', `they', `them'. The `stress' topic also uses general words such as `more', `than', or `people' to link stress to all people, and in the same vein, the `crisis' topic focuses on problems within organizations such as governments. The `drama' and `sorry' topics tend towards more specific causal statements. Drama used the words: `like', `she', and `her' while documents in the sorry topic tended to address other people. The topics of causal documents discovered by LDA showed that both general and specific statements are made regarding news, medicine, and relationships when individuals make causal attributions online. Discussion The power of online communication is the speed and ease with which information can be propagated by potentially any connected users. Yet these strengths come at a cost: rumors and misinformation also spread easily. 
Causal misattribution is at the heart of many rumors, conspiracy theories, and misinformation campaigns. Given the central role of causal statements, further studies of the interplay of information propagation and online causal attributions are crucial. Are causal statements more likely to spread online and, if so, in which ways? What types of social media users are more or less likely to make causal statements? Will a user be more likely to make a causal statement if they have recently been exposed to one or more causal statements from other users? The topics of causal statements also bring forth important questions to be addressed: how timely are causal statements? Are certain topics always being discussed in causal statements? Are there causal topics that are very popular for only brief periods and then forgotten? Temporal dynamics of causal statements are also interesting: do time-of-day or time-of-year factors play a role in how causal statements are made? Our work here focused on a limited subset of causal statements, but more generally, these results may inform new methods for automatically detecting causal statements from unstructured, natural language text BIBREF17 . Better computational tools focused on causal statements are an important step towards further understanding misinformation campaigns and other online activities. Lastly, an important but deeply challenging open question is how, if it is even possible, to validate the accuracy of causal statements. Can causal statements be ranked by some confidence metric(s)? We hope to pursue these and other questions in future research. Parts-of-speech tagging depends on punctuation and casing, which we filtered in our data, so a study of how robust the POS algorithm is to punctuation and casing removal is important. We computed POS tags for the corpora with and without casing as well as with and without punctuation (which includes hashtags, links and at-symbols). Two tags mentioned in Fig. 1 B, NNPS and LS (which was not significant), were affected by punctuation removal. Otherwise, there is a strong correlation (Fig. 4 ) between Odds Ratios (causal vs. control) with punctuation and without punctuation, including casing and without casing ( $\rho = 0.71$ and $0.80$ , respectively), indicating the POS differences between the corpora were primarily not due to the removal of punctuation or casing. Acknowledgments We thank R. Gallagher for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. ISS-1447634.
Randomly from a Twitter dump
e111925a82bad50f8e83da274988b9bea8b90005
e111925a82bad50f8e83da274988b9bea8b90005_0
Q: How do they collect the control corpus? Text: Introduction Social media and online social networks now provide vast amounts of data on human online discourse and other activities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . With so much communication taking place online and with social media being capable of hosting powerful misinformation campaigns BIBREF7 such as those claiming vaccines cause autism BIBREF8 , BIBREF9 , it is more important than ever to better understand the discourse of causality and the interplay between online communication and the statement of cause and effect. Causal inference is a crucial way that humans comprehend the world, and it has been a major focus of philosophy, statistics, mathematics, psychology, and the cognitive sciences. Philosophers such as Hume and Kant have long argued whether causality is a human-centric illusion or the discovery of a priori truth BIBREF10 , BIBREF11 . Causal inference in science is incredibly important, and researchers have developed statistical measures such as Granger causality BIBREF12 , mathematical and probabilistic frameworks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , and text mining procedures BIBREF17 , BIBREF18 , BIBREF19 to better infer causal influence from data. In the cognitive sciences, the famous perception experiments of Michotte et al. led to a long line of research exploring the cognitive biases that humans possess when attempting to link cause and effect BIBREF20 , BIBREF21 , BIBREF22 . How humans understand and communicate cause and effect relationships is complicated, and is influenced by language structure BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 and sentiment or valence BIBREF27 . A key finding is that the perceived emphasis or causal weight changes between the agent (the grammatical construct responsible for a cause) and the patient (the construct effected by the cause) depending on the types of verbs used to describe the cause and effect. Researchers have hypothesized BIBREF28 that this is because of the innate weighting property of the verbs in the English language that humans use to attribute causes and effects. Another finding is the role of a valence bias: the volume and intensity of causal reasoning may increase due to negative feedback or negative events BIBREF27 . Despite these long lines of research, causal attributions made via social media or online social networks have not been well studied. The goal of this paper is to explore the language and topics of causal statements in a large corpus of social media taken from Twitter. We hypothesize that language and sentiment biases play a significant role in these statements, and that tools from natural language processing and computational linguistics can be used to study them. We do not attempt to study the factual correctness of these statements or offer any degree of verification, nor do we exhaustively identify and extract all causal statements from these data. Instead, here we focus on statements that are with high certainty causal statements, with the goal to better understand key characteristics about causal statements that differ from everyday online communication. The rest of this paper is organized as follows: In Sec. "Materials and Methods" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. 
"Results" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. "Discussion" . Dataset, filtering, and corpus selection Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work. All document text was processed the same way. Punctuation, XML characters, and hyperlinks were removed, as were Twitter-specific “at-mentions” and “hashtags” (see also the Appendix). There is useful information here, but it is either not natural language text, or it is Twitter-specific, or both. Documents were broken into individual words (unigrams) on whitespace. Casing information was retained, as we will use it for our Named Entity analysis, but otherwise all words were considered lowercase only (see also the Appendix). Stemming BIBREF30 and lemmatization BIBREF31 were not performed. Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively. Tagging and corpus comparison Documents were further studied by annotating their unigrams with Parts-of-Speech (POS) and Named Entities (NE) tags. POS tagging was done using NLTK v3.1 BIBREF29 which implements an averaged perceptron classifier BIBREF32 trained on the Brown Corpus BIBREF33 . (POS tagging is affected by punctuation; we show in the Appendix that our results are relatively robust to the removal of punctuation.) POS tags denote the nouns, verbs, and other grammatical constructs present in a document. Named Entity Recognition (NER) was performed using the 4-class, distributional similarity tagger provided as part of the Stanford CoreNLP v3.6.0 toolkit BIBREF34 . 
As noted above, NER aims to identify and classify proper words in a text. The NE classifications considered were: Organization, Location, Person, and Misc. The Stanford NER tagger uses a conditional random field model BIBREF35 trained on diverse sets of manually-tagged English-language data (CoNLL-2003) BIBREF34 . Conditional random fields allow dependencies between words so that `New York' and `New York Times', for example, are classified separately as a location and organization, respectively. These taggers are commonly used and often provide reasonably accurate results, but there is always potential ambiguity in written text and improving upon these methods remains an active area of research. Unigrams, POS, and NEs were compared between the cause and control corpora using odds ratios (ORs): $$\operatorname{OR}(x) = \frac{p_C(x)/ (1-p_C(x))}{p_N(x) / (1-p_N(x))},$$ (Eq. 1) where $p_C(x)$ and $p_N(x)$ are the probabilities that a unigram, POS, or NE $x$ occurs in the causal and control corpus, respectively. These probabilities were computed for each corpus separately as $p(x) = f(x) / \sum _{x^{\prime } \in V} f(x^{\prime })$ , where $f(x)$ is the total number of occurrences of $x$ in the corpus and $V$ is the relevant set of unigrams, POS, or NEs. Confidence intervals for the ORs were computed using Wald's methodology BIBREF36 . As there are many unique unigrams in the text, when computing unigram ORs we focused on the most meaningful unigrams within each corpus by using the following filtering criteria: we considered only the $\operatorname{OR}$s of the 1500 most frequent unigrams in that corpus that also have a term-frequency-inverse-document-frequency (tf-idf) score above the 90th percentile for that corpus BIBREF37 . The tf-idf was computed as $$\mbox{tf-idf}(w) = \log f(w) \times \log \left( \frac{D}{\mathit{df}(w)} \right) ,$$ (Eq. 2) where $D$ is the total number of documents in the corpus, and $\mathit {df}(w)$ is the number of documents in the corpus containing unigram $w$ . Intuitively, unigrams with higher tf-idf scores appear frequently, but are not so frequent that they are ubiquitous throughout all documents. Filtering via tf-idf is standard practice in the information retrieval and data mining fields. Cause-trees For a better understanding of the higher-order language structure present in text phrases, cause-trees were constructed. A cause-tree starts with a root cause word (either `caused', `causing' or `causes'), then the two most probable words following (preceding) the root are identified. Next, the root word plus one of the top probable words is combined into a bigram and the top two most probable words following (preceding) this bigram are found. Repeatedly applying this process builds a binary tree representing the $n$ -grams that begin with (terminate at) the root word. This process can continue until a certain $n$ -gram length is reached or until there are no more documents long enough to search. Sentiment analysis Sentiment analysis was applied to estimate the emotional content of documents. Two levels of analysis were used: a method where individual unigrams were given crowdsourced numeric sentiment scores, and a second method involving a trained classifier that can incorporate document-level phrase information. For the first sentiment analysis, each unigram $w$ was assigned a crowdsourced “labMT” sentiment score $s(w)$ BIBREF5 . (Unlike BIBREF5 , scores were recentered by subtracting the mean, $s(w) \leftarrow s(w)-\left<s\right>$ .)
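To make Eqs. 1 and 2 concrete, here is a minimal sketch of the odds-ratio comparison with a Wald-style confidence interval and of the tf-idf score; the counts are toy numbers, the function names are ours, and z = 1.96 corresponds to a 95% interval.

import math
from collections import Counter

def odds_ratio(x, causal_counts, control_counts, z=1.96):
    # Eq. 1 with p(x) = f(x) / sum_x' f(x'); a, b, c, d form the 2x2 contingency counts.
    a = causal_counts[x]; b = sum(causal_counts.values()) - a
    c = control_counts[x]; d = sum(control_counts.values()) - c
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Wald standard error on the log odds ratio
    return or_, (math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se))

def tf_idf(w, f, df, n_docs):
    # Eq. 2: log f(w) * log(D / df(w)).
    return math.log(f[w]) * math.log(n_docs / df[w])

causal_counts = Counter({"stress": 1200, "photo": 150, "cute": 90})
control_counts = Counter({"stress": 400, "photo": 900, "cute": 700})
print(odds_ratio("stress", causal_counts, control_counts))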
Under the labMT scoring, unigrams determined by volunteer raters to have a negative emotional sentiment (`hate', `death', etc.) have $s(w) < 0$ , while unigrams determined to have a positive emotional sentiment (`love', `happy', etc.) tend to have $s(w) > 0$ . Unigrams that have labMT scores and are above the 90th percentile of tf-idf for the corpus form the set $\tilde{V}$ . (Unigrams in $\tilde{V}$ need not be among the 1500 most frequent unigrams.) The set $\tilde{V}$ captures 87.9% (91.5%) of total unigrams in the causal (control) corpus. Crucially, the tf-idf filtering ensures that the words `caused', `causes', and `causing', which have a slight negative sentiment, are not included and do not introduce a systematic bias when comparing the two corpora. This sentiment measure works on a per-unigram basis, and is therefore best suited for large bodies of text, not short documents BIBREF5 . Instead of considering individual documents, the distributions of labMT scores over all unigrams for each corpus were used to compare the corpora. In addition, a single sentiment score for each corpus was computed as the average sentiment score over all unigrams in that corpus, weighted by unigram frequency: $\sum _{w \in \tilde{V}} {f(w) s(w)} \Big / \sum _{w^{\prime } \in \tilde{V}} f(w^{\prime })$ . To supplement this sentiment analysis method, we applied a second method capable of estimating with reasonable accuracy the sentiment of individual documents. We used the sentiment classifier BIBREF38 included in the Stanford CoreNLP v3.6.0 toolkit to classify the documents in each corpus. Documents were individually classified into one of five categories: very negative, negative, neutral, positive, very positive. The data used to train this classifier is taken from positive and negative reviews of movies (Stanford Sentiment Treebank v1.0) BIBREF38 . Topic modeling Lastly, we applied topic modeling to the causal corpus to determine the topical foci most discussed in causal statements. Topics were built from the causal corpus using Latent Dirichlet Allocation (LDA) BIBREF39 . Under LDA each document is modeled as a bag-of-words or unordered collection of unigrams. Topics are considered as mixtures of unigrams by estimating conditional distributions over unigrams: $P(w|T)$ , the probability of unigram $w$ given topic $T$ ; and documents are considered as mixtures of topics via $P(T|d)$ , the probability of topic $T$ given document $d$ . These distributions are then found via statistical inference given the observed distributions of unigrams across documents. The total number of topics is a parameter chosen by the practitioner. For this study we used the MALLET v2.0.8RC3 topic modeling toolkit BIBREF40 for model inference. By inspecting the most probable unigrams per topic (according to $P(w|T)$ ), we found that 10 topics provided meaningful and distinct groupings. Results We have collected approximately 1M causal statements made on Twitter over the course of 2013, and for a control we gathered the same number of statements selected at random but controlling for time of year (see Methods). We applied Parts-of-Speech (POS) and Named Entity (NE) taggers to all these texts. Some post-processed and tagged example documents, both causal and control, are shown in Fig. 1 A. We also applied sentiment analysis methods to these documents (Methods) and we have highlighted very positive and very negative words throughout Fig. 1 . In Fig.
1 B we present odds ratios for how frequently unigrams (words), POS, or NE appear in causal documents relative to control documents. The three unigrams most strongly skewed towards causal documents were `stress', `problems', and `trouble', while the three most skewed towards control documents were `photo', `ready', and `cute'. While these are only a small number of the unigrams present, this does imply a negative sentiment bias among causal statements (we return to this point shortly). Figure 1 B also presents odds ratios for POS tags, to help us measure the differences in grammatical structure between causal and control documents (see also the Appendix for the effects of punctuation and casing on these odds ratios). The causal corpus showed greater odds for plural nouns (Penn Treebank tag: NNS), plural proper nouns (NNPS), Wh-determiners/pronouns (WDT, WP$) such as `whichever',`whatever', `whose', or `whosever', and predeterminers (PDT) such as `all' or `both'. Predeterminers quantify noun phrases such as `all' in `after all the events that caused you tears', showing that many causal statements, despite the potential brevity of social media, can encompass or delineate classes of agents and/or patients. On the other hand, the causal corpus has lower odds than the control corpus for list items (LS), proper singular nouns (NNP), and interjections (UH). Lastly, Fig. 1 B contains odds ratios for NE tags, allowing us to quantify the types of proper nouns that are more or less likely to appear in causal statements. Of the four tags, only the “Person” tag is less likely in the causal corpus than the control. (This matches the odds ratio for the proper singular noun discussed above.) Perhaps surprisingly, these results together imply that causal statements are less likely to involve individual persons than non-causal statements. There is considerable celebrity news and gossip on social media BIBREF4 ; discussions of celebrities may not be especially focused on attributing causes to these celebrities. All other NE tags, Organization, Location, and Miscellaneous, occur more frequently in the causal corpus than the control. All the odds ratios in Fig. 1 B were significant at the $\alpha = 0.05$ level except the List item marker (LS) POS tag. The unigram analysis in Fig. 1 does not incorporate higher-order phrase structure present in written language. To explore these structures specifically in the causal corpus, we constructed “cause-trees”, shown in Fig. 2 . Inspired by association mining BIBREF41 , a cause-tree is a binary tree rooted at either `caused', `causes', or `causing', that illustrates the most frequently occurring $n$ -grams that either begin or end with that root cause word (see Methods for details). The “causes” tree shows the focused writing (sentence segments) that many people use to express either the relationship between their own actions and a cause-and-effect (“even if it causes”), or the uncontrollable effect a cause may have on themselves: “causes me to have” shows a person's inability to control a causal event (“[...] i have central heterochromia which causes me to have dual colors in both eyes”). The `causing' tree reveals our ability to confine causal patterns to specific areas, and also our ability to be affected by others causal decisions. Phrases like “causing a scene in/at” and “causing a ruckus in/at” (from documents like “causing a ruckus in the hotel lobby typical [...]”) show people commonly associate bounds on where causal actions take place. 
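The cause-trees discussed here can be grown with a simple greedy procedure over token sequences; the following sketch builds a forward tree (n-grams beginning with the root) and is only an illustration of the Methods description, with toy documents and our own function names.

from collections import Counter

def grow_cause_tree(docs, root="causes", depth=3, branch=2):
    # At each step, extend the current prefix with its `branch` most probable next words.
    def expand(prefix, level):
        if level == depth:
            return {}
        nxt = Counter()
        k = len(prefix)
        for doc in docs:
            for i in range(len(doc) - k):
                if tuple(doc[i:i + k]) == prefix:
                    nxt[doc[i + k]] += 1
        return {w: expand(prefix + (w,), level + 1) for w, _ in nxt.most_common(branch)}
    return {root: expand((root,), 0)}

docs = [
    "the storm causes major damage to homes".split(),
    "too much stress causes major problems for me".split(),
    "lack of sleep causes me to have headaches".split(),
]
print(grow_cause_tree(docs, root="causes"))

A backward tree (n-grams terminating at the root) follows the same recipe using the most probable preceding words instead.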
The causing tree also shows people's tendency to emphasize current negativity: Phrases like “pain this is causing” coming from documents like “cant you see the pain you are causing her” supports the sentiment bias that causal attribution is more likely for negative cause-effect associations. Finally, the `caused' tree focuses heavily on negative events and indicates people are more likely to remember negative causal events. Documents with phrases from the caused tree (“[...] appalling tragedy [...] that caused the death”, “[...] live with this pain that you caused when i was so young [...]”) exemplify the negative events that are focused on are large-scale tragedies or very personal negative events in one's life. Taken together, the popularity of negative sentiment unigrams (Fig. 1 ) and $n$ -grams (Fig. 2 ) among causal documents shows that emotional sentiment or “valence” may play a role in how people perform causal attribution BIBREF27 . The “if it bleeds, it leads” mentality among news media, where violent and negative news are more heavily reported, may appeal to this innate causal association mechanism. (On the other hand, many news media themselves use social media for reporting.) The prevalence of negative sentiment also contrasts with the “better angels of our nature” evidence of Pinker BIBREF42 , illustrating one bias that shows why many find the results of Ref. BIBREF42 surprising. Given this apparent sentiment skew, we further studied sentiment (Fig. 3 ). We compared the sentiment between the corpora in four different ways to investigate the observation (Figs. 1 B and 2 ) that people focus more about negative concepts when they discuss causality. First, we computed the mean sentiment score of each corpus using crowdsourced “labMT” scores weighted by unigram frequency (see Methods). We also applied tf-idf filtering (Methods) to exclude very common words, including the three cause-words, from the mean sentiment score. The causal corpus text was slightly negative on average while the control corpus was slightly positive (Fig. 3 A). The difference in mean sentiment score was significant (t-test: $p < 0.01$ ). Second, we moved from the mean score to the distribution of sentiment across all (scored) unigrams in the causal and control corpora (Fig. 3 B). The causal corpus contained a large group of negative sentiment unigrams, with labMT scores in the approximate range $-3 < s < -1/2$ ; the control corpus had significantly fewer unigrams in this score range. Third, in Fig. 3 C we used POS tags to categorize scored unigrams into nouns, verbs, and adjectives. Studying the distributions for each, we found that nouns explain much of the overall difference observed in Fig. 3 B, with verbs showing a similar but smaller difference between the two corpora. Adjectives showed little difference. The distributions in Fig. 3 C account for 87.8% of scored text in the causal corpus and 77.2% of the control corpus. The difference in sentiment between corpora was significant for all distributions (t-test: $p < 0.01$ ). Fourth, to further confirm that the causal documents tend toward negative sentiment, we applied a separate, independent sentiment analysis using the Stanford NLP sentiment toolkit BIBREF38 to classify the sentiment of individual documents not unigrams (see Methods). Instead of a numeric sentiment score, this classifier assigns documents to one of five categories ranging from very negative to very positive. 
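As a brief aside before the classifier results, the first, unigram-level measure reduces to a frequency-weighted average of (mean-centred) labMT scores over the tf-idf-filtered vocabulary $\tilde{V}$. The scores and counts below are placeholders, not the crowdsourced values.

def corpus_sentiment(freqs, labmt, keep):
    # Frequency-weighted mean of labMT scores over the filtered vocabulary `keep`.
    num = sum(freqs[w] * labmt[w] for w in keep if w in freqs)
    den = sum(freqs[w] for w in keep if w in freqs)
    return num / den if den else 0.0

labmt = {"love": 1.9, "happy": 1.7, "stress": -1.2, "death": -2.3, "photo": 0.4}   # toy, mean-centred
causal_freqs = {"stress": 320, "death": 150, "love": 60, "photo": 20}
control_freqs = {"photo": 400, "happy": 210, "love": 180, "stress": 40}
keep = set(labmt)   # in the paper, words below the 90th tf-idf percentile are excluded here
print(corpus_sentiment(causal_freqs, labmt, keep), corpus_sentiment(control_freqs, labmt, keep))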
The classifier showed that the causal corpus contains more negative and very negative documents than the control corpus, while the control corpus contains more neutral, positive, and very positive documents (Fig. 3 D). We have found language (Figs. 1 and 2 ) and sentiment (Fig. 3 ) differences between causal statements made on social media compared with other social media statements. But what is being discussed? What are the topical foci of causal statements? To study this, for our last analysis we applied topic models to the causal statements. Topic modeling finds groups of related terms (unigrams) by considering similarities between how those terms co-occur across a set of documents. We used the popular topic modeling method Latent Dirichlet Allocation (LDA) BIBREF39 . We ranked unigrams by how strongly associated they were with the topic. Inspecting these unigrams we found that a 10-topic model discovered meaningful topics. See Methods for full details. The top unigrams for each topic are shown in Tab. 1 . Topics in the causal corpus tend to fall into three main categories: (i) news, covering current events, weather, etc.; (ii) medicine and health, covering cancer, obesity, stress, etc.; and (iii) relationships, covering problems, stress, crisis, drama, sorry, etc. While the topics are quite different, they are all similar in their use of negative sentiment words. The negative/global features in the `news' topic are captured in the most representative words: damage, fire, power, etc. Similar to news, the `accident' topic balances the more frequent day-to-day minor frustrations with the less frequent but more severe impacts of car accidents. The words `traffic' and `delays' are the most probable words for this topic, and are common, low-impact occurrences. On the contrary, `crash', `car', `accident' and `death' are the next most probable words for the accident topic, and generally show a focus on less-common but higher-impact events. The `medical' topic also focused on negative words; highly probable words for this topic included `cancer', `break', `disease', `blood', etc. Meanwhile, the `body' topic contained words like: `stress', `lose', and `weight', giving a focus on on our more personal struggles with body image. Besides body image, the `injuries' topic uses specific pronouns (`his', `him', `her') in references to a person's own injuries or the injuries of others such as athletes. Aside from more factual information, social information is well represented in causal statements. The `problems' topic shows people attribute their problems to many others with terms like: `dont', `people', `they', `them'. The `stress' topic also uses general words such as `more', `than', or `people' to link stress to all people, and in the same vein, the `crisis' topic focuses on problems within organizations such as governments. The `drama' and `sorry' topics tend towards more specific causal statements. Drama used the words: `like', `she', and `her' while documents in the sorry topic tended to address other people. The topics of causal documents discovered by LDA showed that both general and specific statements are made regarding news, medicine, and relationships when individuals make causal attributions online. Discussion The power of online communication is the speed and ease with which information can be propagated by potentially any connected users. Yet these strengths come at a cost: rumors and misinformation also spread easily. 
Causal misattribution is at the heart of many rumors, conspiracy theories, and misinformation campaigns. Given the central role of causal statements, further studies of the interplay of information propagation and online causal attributions are crucial. Are causal statements more likely to spread online and, if so, in which ways? What types of social media users are more or less likely to make causal statements? Will a user be more likely to make a causal statement if they have recently been exposed to one or more causal statements from other users? The topics of causal statements also bring forth important questions to be addressed: how timely are causal statements? Are certain topics always being discussed in causal statements? Are there causal topics that are very popular for only brief periods and then forgotten? Temporal dynamics of causal statements are also interesting: do time-of-day or time-of-year factors play a role in how causal statements are made? Our work here focused on a limited subset of causal statements, but more generally, these results may inform new methods for automatically detecting causal statements from unstructured, natural language text BIBREF17 . Better computational tools focused on causal statements are an important step towards further understanding misinformation campaigns and other online activities. Lastly, an important but deeply challenging open question is how, if it is even possible, to validate the accuracy of causal statements. Can causal statements be ranked by some confidence metric(s)? We hope to pursue these and other questions in future research. Parts-of-speech tagging depends on punctuation and casing, which we filtered in our data, so a study of how robust the POS algorithm is to punctuation and casing removal is important. We computed POS tags for the corpora with and without casing as well as with and without punctuation (which includes hashtags, links and at-symbols). Two tags mentioned in Fig. 1 B, NNPS and LS (which was not significant), were affected by punctuation removal. Otherwise, there is a strong correlation (Fig. 4 ) between Odds Ratios (causal vs. control) with punctuation and without punctuation, including casing and without casing ( $\rho = 0.71$ and $0.80$ , respectively), indicating the POS differences between the corpora were primarily not due to the removal of punctuation or casing. Acknowledgments We thank R. Gallagher for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. ISS-1447634.
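As a supplement to the topic-modeling step described in Methods, the sketch below runs LDA with scikit-learn instead of MALLET (an assumption made purely to keep the example short) on a toy causal corpus, and prints the most probable unigrams per topic in the spirit of inspecting $P(w|T)$.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

causal_docs = [
    "traffic delays caused by the crash on the highway",
    "stress causes so many problems in my life",
    "the storm causes damage and power outages",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(causal_docs)

# The paper settles on 10 topics; 2 are used here only because the toy corpus is tiny.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, row in enumerate(lda.components_):
    top = [terms[i] for i in row.argsort()[::-1][:5]]
    print(f"topic {k}: {top}")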
Randomly from Twitter
ba48c095c496d01c7717eaa271470c3406bf2d7c
ba48c095c496d01c7717eaa271470c3406bf2d7c_0
Q: What languages do they experiment with? Text: Introduction Question answering (QA) with neural network, i.e. neural QA, is an active research direction along the road towards the long-term AI goal of building general dialogue agents BIBREF0 . Unlike conventional methods, neural QA does not rely on feature engineering and is (at least nearly) end-to-end trainable. It reduces the requirement for domain specific knowledge significantly and makes domain adaption easier. Therefore, it has attracted intensive attention in recent years. Resolving QA problem requires several fundamental abilities including reasoning, memorization, etc. Various neural methods have been proposed to improve such abilities, including neural tensor networks BIBREF1 , recursive networks BIBREF2 , convolution neural networks BIBREF3 , BIBREF4 , BIBREF5 , attention models BIBREF6 , BIBREF5 , BIBREF7 , and memories BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , etc. These methods achieve promising results on various datasets, which demonstrates the high potential of neural QA. However, we believe there are still two major challenges for neural QA: System development and/or evaluation on real-world data: Although several high quality and well-designed QA datasets have been proposed in recent years, there are still problems about using them to develop and/or evaluate QA system under real-world settings due to data size and the way they are created. For example, bAbI BIBREF0 and the 30M Factoid Question-Answer Corpus BIBREF13 are artificially synthesized; the TREC datasets BIBREF14 , Free917 BIBREF15 and WebQuestions BIBREF16 are human generated but only have few thousands of questions; SimpleQuestions BIBREF11 and the CNN and Daily Mail news datasets BIBREF6 are large but generated under controlled conditions. Thus, a new large-scale real-world QA dataset is needed. A new design choice for answer production besides sequence generation and classification/ranking: Without loss of generality, the methods used for producing answers in existing neural QA works can be roughly categorized into the sequence generation type and the classification/ranking type. The former generates answers word by word, e.g. BIBREF0 , BIBREF10 , BIBREF6 . As it generally involves INLINEFORM0 computation over a large vocabulary, the computational cost is remarkably high and it is hard to produce answers with out-of-vocabulary word. The latter produces answers by classification over a predefined set of answers, e.g. BIBREF12 , or ranking given candidates by model score, e.g. BIBREF5 . Although it generally has lower computational cost than the former, it either also has difficulties in handling unseen answers or requires an extra candidate generating component which is hard for end-to-end training. Above all, we need a new design choice for answer production that is both computationally effective and capable of handling unseen words/answers. In this work, we address the above two challenges by a new dataset and a new neural QA model. Our contributions are two-fold: Experimental results show that our model outperforms baselines with a large margin on the WebQA dataset, indicating that it is effective. Furthermore, our model even achieves an F1 score of 70.97% on character-based input, which is comparable with the 74.69% F1 score on word-based input, demonstrating that our model is robust. Factoid QA as Sequence Labeling In this work, we focus on open-domain factoid QA. 
Taking Figure FIGREF3 as an example, we formalize the problem as follows: given each question Q, we have one or more evidences E, and the task is to produce the answer A, where an evidence is a piece of text of any length that contains relevant information to answer the question. The advantage of this formalization is that evidences can be retrieved from web or unstructured knowledge base, which can improve system coverage significantly. Inspired by BIBREF18 , we introduce end-to-end sequence labeling as a new design choice for answer production in neural QA. Given a question and an evidence, we use CRF BIBREF17 to assign a label to each word in the evidence to indicate whether the word is at the beginning (B), inside (I) or outside (O) of the answer (see Figure FIGREF3 for example). The key difference between our work and BIBREF18 is that BIBREF18 needs a lot work on feature engineering which further relies on POS/NER tagging, dependency parsing, question type analysis, etc. While we avoid feature engineering, and only use one single model to solve the problem. Furthermore, compared with sequence generation and classification/ranking methods for answer production, our method avoids expensive INLINEFORM0 computation and can handle unseen answers/words naturally in a principled way. Formally, we formalize QA as a sequence labeling problem as follows: suppose we have a vocabulary INLINEFORM0 of size INLINEFORM1 , given question INLINEFORM2 and evidence INLINEFORM3 , where INLINEFORM4 and INLINEFORM5 are one-hot vectors of dimension INLINEFORM6 , and INLINEFORM7 and INLINEFORM8 are the number of words in the question and evidence respectively. The problem is to find the label sequence INLINEFORM9 which maximizes the conditional probability under parameter INLINEFORM10 DISPLAYFORM0 In this work, we model INLINEFORM0 by a neural network composed of LSTMs and CRF. Overview Figure FIGREF4 shows the structure of our model. The model consists of three components: (1) question LSTM for computing question representation; (2) evidence LSTMs for evidence analysis; and (3) a CRF layer for sequence labeling. The question LSTM in a form of a single layer LSTM equipped with a single time attention takes the question as input and generates the question representation INLINEFORM0 . The three-layer evidence LSTMs takes the evidence, question representation INLINEFORM1 and optional features as input and produces “features” for the CRF layer. The CRF layer takes the “features” as input and produces the label sequence. The details will be given in the following sections. Long Short-Term Memory (LSTM) Following BIBREF19 , we define INLINEFORM0 as a function mapping its input INLINEFORM1 , previous state INLINEFORM2 and output INLINEFORM3 to current state INLINEFORM4 and output INLINEFORM5 : DISPLAYFORM0 where INLINEFORM0 are parameter matrices, INLINEFORM1 are biases, INLINEFORM2 is LSTM layer width, INLINEFORM3 is the INLINEFORM4 function, INLINEFORM5 , INLINEFORM6 and INLINEFORM7 are the input gate, forget gate and output gate respectively. Question LSTM The question LSTM consists of a single-layer LSTM and a single-time attention model. The question INLINEFORM0 is fed into the LSTM to produce a sequence of vector representations INLINEFORM1 DISPLAYFORM0 where INLINEFORM0 is the embedding matrix and INLINEFORM1 is word embedding dimension. Then a weight INLINEFORM2 is computed by the single-time attention model for each INLINEFORM3 DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 . 
And finally the weighted average INLINEFORM2 of INLINEFORM3 is used as the representation of the question DISPLAYFORM0 Evidence LSTMs The three-layer evidence LSTMs processes evidence INLINEFORM0 INLINEFORM1 to produce “features” for the CRF layer. The first LSTM layer takes evidence INLINEFORM0 , question representation INLINEFORM1 and optional features as input. We find the following two simple common word indicator features are effective: Question-Evidence common word feature (q-e.comm): for each word in the evidence, the feature has value 1 when the word also occurs in the question, otherwise 0. The intuition is that words occurring in questions tend not to be part of the answers for factoid questions. Evidence-Evidence common word feature (e-e.comm): for each word in the evidence, the feature has value 1 when the word occurs in another evidence, otherwise 0. The intuition is that words shared by two or more evidences are more likely to be part of the answers. Although counterintuitive, we found non-binary e-e.comm feature values does not work well. Because the more evidences we considered, the more words tend to get non-zero feature values, and the less discriminative the feature is. The second LSTM layer stacks on top of the first LSTM layer, but processes its output in a reverse order. The third LSTM layer stacks upon the first and second LSTM layers with cross layer links, and its output serves as features for CRF layer. Formally, the computations are defined as follows DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are one-hot feature vectors, INLINEFORM2 and INLINEFORM3 are embeddings for the features, and INLINEFORM4 and INLINEFORM5 are the feature embedding dimensions. Note that we use the same word embedding matrix INLINEFORM6 as in question LSTM. Sequence Labeling Following BIBREF20 , BIBREF21 , we use CRF on top of evidence LSTMs for sequence labeling. The probability of a label sequence INLINEFORM0 given question INLINEFORM1 and evidence INLINEFORM2 is computed as DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 is the number of label types, INLINEFORM3 is the transition weight from label INLINEFORM4 to INLINEFORM5 , and INLINEFORM6 is the INLINEFORM7 -th value of vector INLINEFORM8 . Training The objective function of our model is INLINEFORM0 where INLINEFORM0 is the golden label sequence, and INLINEFORM1 is training set. We use a minibatch stochastic gradient descent (SGD) BIBREF22 algorithm with rmsprop BIBREF23 to minimize the objective function. The initial learning rate is 0.001, batch size is 120, and INLINEFORM0 . We also apply dropout BIBREF24 to the output of all the LSTM layers. The dropout rate is 0.05. All these hyper-parameters are determined empirically via grid search on validation set. WebQA Dataset In order to train and evaluate open-domain factoid QA system for real-world questions, we build a new Chinese QA dataset named as WebQA. The dataset consists of tuples of (question, evidences, answer), which is similar to example in Figure FIGREF3 . All the questions, evidences and answers are collected from web. Table TABREF20 shows some statistics of the dataset. The questions and answers are mainly collected from a large community QA website Baidu Zhidao and a small portion are from hand collected web documents. Therefore, all these questions are indeed asked by real-world users in daily life instead of under controlled conditions. 
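As a side note on the labeling scheme and the two indicator features introduced above, the sketch below shows how the binary q-e.comm / e-e.comm features can be computed and how an answer is read off a B/I/O label sequence. The function names and toy inputs are ours, not the authors' code; the evidence echoes the Einstein example used in the paper, and the second evidence is set to the evidence itself purely for the demo.

def common_word_features(question, evidence, other_evidence):
    # Binary q-e.comm and e-e.comm features, one value per evidence word.
    q_set, o_set = set(question), set(other_evidence)
    return ([1 if w in q_set else 0 for w in evidence],
            [1 if w in o_set else 0 for w in evidence])

def extract_answer(evidence, labels):
    # Return the first B/I span; an all-O prediction yields an empty answer.
    answer = []
    for word, tag in zip(evidence, labels):
        if tag == "B" and not answer:
            answer.append(word)
        elif tag == "I" and answer:
            answer.append(word)
        elif answer:
            break
    return " ".join(answer)

question = "when did Einstein marry his first wife".split()
evidence = "Einstein married his first wife Mileva Maric in 1903".split()
labels = ["O", "O", "O", "O", "O", "B", "I", "O", "O"]
print(common_word_features(question, evidence, evidence)[0])  # q-e.comm: [1, 0, 1, 1, 1, 0, 0, 0, 0]
print(extract_answer(evidence, labels))                       # Mileva Maric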
All the questions are of single-entity factoid type, which means (1) each question is a factoid question and (2) its answer involves only one entity (but may have multiple words). The question in Figure FIGREF3 is a positive example, while the question “Who are the children of Albert Enistein?” is a counter example because the answer involves three persons. The type and correctness of all the question answer pairs are verified by at least two annotators. All the evidences are retrieved from Internet by using a search engine with questions as queries. We download web pages returned in the first 3 result pages and take all the text pieces which have no more than 5 sentences and include at least one question word as candidate evidences. As evidence retrieval is beyond the scope of this work, we simply use TF-IDF values to re-rank these candidates. For each question in the training set, we provide the top 10 ranked evidences to annotate (“Annotated Evidence” in Table TABREF20 ). An evidence is annotated as positive if the question can be answered by just reading the evidence without any other prior knowledge, otherwise negative. Only evidences whose annotations are agreed by at least two annotators are retained. We also provide trivial negative evidences (“Retrieved Evidence” in Table TABREF20 ), i.e. evidences that do not contain golden standard answers. For each question in the validation and test sets, we provide one major positive evidence, and maybe an additional positive one to compute features. Both of them are annotated. Raw retrieved evidences are also provided for evaluation purpose (“Retrieved Evidence” in Table TABREF20 ). The dataset will be released on the project page http://idl.baidu.com/WebQA.html. Baselines We compare our model with two sets of baselines: MemN2N BIBREF12 is an end-to-end trainable version of memory networks BIBREF9 . It encodes question and evidence with a bag-of-word method and stores the representations of evidences in an external memory. A recurrent attention model is used to retrieve relevant information from the memory to answer the question. Attentive and Impatient Readers BIBREF6 use bidirectional LSTMs to encode question and evidence, and do classification over a large vocabulary based on these two encodings. The simpler Attentive Reader uses a similar way as our work to compute attention for the evidence. And the more complex Impatient Reader computes attention after processing each question word. The key difference between our model and the two readers is that they produce answer by doing classification over a large vocabulary, which is computationally expensive and has difficulties in handling unseen words. However, as our model uses an end-to-end trainable sequence labeling technique, it avoids both of the two problems by its nature. Evaluation Method The performance is measured with precision (P), recall (R) and F1-measure (F1) DISPLAYFORM0 where INLINEFORM0 is the list of correctly answered questions, INLINEFORM1 is the list of produced answers, and INLINEFORM2 is the list of all questions . As WebQA is collected from web, the same answer may be expressed in different surface forms in the golden standard answer and the evidence, e.g. “北京 (Beijing)” v.s. “北京市 (Beijing province)”. 
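Returning briefly to how WebQA's evidences are gathered, the candidate filter (at most 5 sentences, at least one question word) and a TF-IDF-style re-ranking can be sketched as follows. This is a simplification with our own names and toy idf values; the paper only states that TF-IDF values are used for re-ranking.

import math
import re
from collections import Counter

def candidate_evidences(pieces, question, max_sentences=5):
    # Keep text pieces with at most `max_sentences` sentences that share a word with the question.
    q_words = set(question.lower().split())
    kept = []
    for p in pieces:
        sentences = [s for s in re.split(r"[.!?]+", p) if s.strip()]
        if len(sentences) <= max_sentences and q_words & set(p.lower().split()):
            kept.append(p)
    return kept

def rerank(candidates, question, idf):
    # Score each candidate by the summed tf-idf of the question words it contains.
    q_words = question.lower().split()
    def score(c):
        tf = Counter(c.lower().split())
        return sum(math.log(1 + tf[w]) * idf.get(w, 0.0) for w in q_words)
    return sorted(candidates, key=score, reverse=True)

question = "when did Einstein marry his first wife"
pieces = [
    "Einstein married his first wife Mileva Maric in 1903.",
    "Einstein published the theory of relativity. It changed physics forever.",
]
idf = {"einstein": 2.0, "wife": 1.8, "first": 0.5, "marry": 2.5, "when": 0.2, "did": 0.1, "his": 0.1}
print(rerank(candidate_evidences(pieces, question), question, idf))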
Because the same answer may appear in different surface forms, we use two ways to count correctly answered questions, which are referred to as “strict” and “fuzzy” in the tables: Strict matching: A question is counted if and only if the produced answer is identical to the golden standard answer; Fuzzy matching: A question is counted if and only if the produced answer is a synonym of the golden standard answer. We also consider two evaluation settings: Annotated evidence: Each question has one major annotated evidence and maybe another annotated evidence for computing q-e.comm and e-e.comm features (Section SECREF14 ); Retrieved evidence: Each question is provided with at most 20 automatically retrieved evidences (see Section SECREF5 for details). All the evidences will be processed by our model independently and answers are voted by frequency to decide the final result. Note that a large proportion of the evidences are negative and our model should not produce any answer for them. Model Settings If not specified, the following hyper-parameters will be used in the rest of this section: LSTM layer width INLINEFORM0 (Section SECREF7 ), word embedding dimension INLINEFORM1 (Section SECREF9 ), feature embedding dimension INLINEFORM2 (Section SECREF9 ). The word embeddings are initialized with pre-trained embeddings using a 5-gram neural language model BIBREF25 and are fixed during training. We will show that injecting noise data is important for improving performance in the retrieved evidence setting in Section SECREF37 . In the following experiments, 20% of the training evidences will be negative ones randomly selected on the fly, of which 25% are annotated negative evidences and 75% are retrieved trivial negative evidences (Section SECREF5 ). The percentages are determined empirically. Intuitively, we provide the noise data to teach the model to recognize unreliable evidence. For each evidence, we will randomly sample another evidence from the remaining evidences of the question and compare them to compute the e-e.comm feature (Section SECREF14 ). We will develop more powerful models to process multiple evidences in a more principled way in the future. As the answer for each question in our WebQA dataset only involves one entity (Section SECREF5 ), we distinguish label Os before and after the first B in the label sequence explicitly to discourage our model from producing multiple answers for a question. For example, the golden labels for the example evidence in Figure FIGREF3 will become “Einstein/O1 married/O1 his/O1 first/O1 wife/O1 Mileva/B Marić/I in/O2 1903/O2”, where we use “O1” and “O2” to denote label Os before and after the first B . “Fuzzy matching” is also used for computing golden standard labels for the training set. For each setting, we will run three trials with different random seeds and report the average performance in the following sections. Comparison with Baselines As the baselines can only predict one-word answers, we only do experiments on the one-word answer subset of WebQA, i.e. only questions with one-word answers are retained for training, validation and test. As shown in Table TABREF23 , our model achieves significantly higher F1 scores than all the baselines. The main reason for the relatively low performance of MemN2N is that it uses a bag-of-word method to encode question and evidence such that higher order information like word order is absent to the model. We think its performance can be improved by designing more complex encoding methods BIBREF26 and leave it as future work.
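Before continuing with the baseline comparison, a short aside on the evaluation protocol just described: precision, recall and F1 with strict or fuzzy (synonym-based) matching, plus frequency voting over multiple evidences. The synonym table and answer strings below are illustrative only.

from collections import Counter

def vote(answers):
    # Majority vote over the non-empty answers produced for a question's evidences.
    answers = [a for a in answers if a]
    return Counter(answers).most_common(1)[0][0] if answers else ""

def evaluate(pred, gold, synonyms=None, fuzzy=False):
    # pred/gold map question ids to answer strings; an empty prediction means "no answer".
    synonyms = synonyms or {}
    def match(p, g):
        return p == g or (fuzzy and p in synonyms.get(g, set()))
    correct = sum(1 for q, g in gold.items() if pred.get(q) and match(pred[q], g))
    produced = sum(1 for a in pred.values() if a)
    p = correct / produced if produced else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

pred = {"q1": vote(["Beijing", "", "Beijing", "Shanghai"])}
gold = {"q1": "Beijing city", "q2": "1903"}
print(evaluate(pred, gold, synonyms={"Beijing city": {"Beijing"}}, fuzzy=True))  # (1.0, 0.5, ~0.667)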
The Attentive and Impatient Readers only have access to the fixed length representations when doing classification. However, our model has access to the outputs of all the time steps of the evidence LSTMs, and scores the label sequence as a whole. Therefore, our model achieves better performance. Evaluation on the Entire WebQA Dataset In this section, we evaluate our model on the entire WebQA dataset. The evaluation results are shown in Table TABREF24 . Although producing multi-word answers is harder, our model achieves comparable results with the one-word answer subset (Table TABREF23 ), demonstrating that our model is effective for both single-word and multi-word word settings. “Softmax” in Table TABREF24 means we replace CRF with INLINEFORM0 , i.e. replace Eq. ( EQREF19 ) with DISPLAYFORM0 CRF outperforms INLINEFORM0 significantly in all cases. The reason is that INLINEFORM1 predicts each label independently, suggesting that modeling label transition explicitly is essential for improving performance. A natural choice for modeling label transition in INLINEFORM2 is to take the last prediction into account as in BIBREF27 . The result is shown in Table TABREF24 as “Softmax( INLINEFORM3 -1)”. However, its performance is only comparable with “Softmax” and significantly lower than CRF. The reason is that we can enumerate all possible label sequences implicitly by dynamic programming for CRF during predicting but this is not possible for “Softmax( INLINEFORM4 -1)” , which indicates CRF is a better choice. “Noise” in Table TABREF24 means whether we inject noise data or not (Section SECREF34 ). As all evidences are positive under the annotated evidence setting, the ability for recognizing unreliable evidence will be useless. Therefore, the performance of our model with and without noise is comparable under the annotated evidence setting. However, the ability is important to improve the performance under the retrieved evidence setting because a large amount of the retrieved evidences are negative ones. As a result, we observe significant improvement by injecting noise data for this setting. Effect of Word Embedding As stated in Section SECREF34 , the word embedding INLINEFORM0 is initialized with LM embedding and kept fixed in training. We evaluate different initialization and optimization methods in this section. The evaluation results are shown in Table TABREF40 . The second row shows the results when the embedding is optimized jointly during training. The performance drops significantly. Detailed analysis reveals that the trainable embedding enlarge trainable parameter number and the model gets over fitting easily. The model acts like a context independent entity tagger to some extend, which is not desired. For example, the model will try to find any location name in the evidence when the word “在哪 (where)” occurs in the question. In contrary, pre-trained fixed embedding forces the model to pay more attention to the latent syntactic regularities. And it also carries basic priors such as “梨 (pear)” is fruit and “李世石 (Lee Sedol)” is a person, thus the model will generalize better to test data with fixed embedding. The third row shows the result when the embedding is randomly initialized and jointly optimized. The performance drops significantly further, suggesting that pre-trained embedding indeed carries meaningful priors. 
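To make the CRF-versus-softmax contrast discussed above concrete, the sketch below decodes a toy sentence both ways: per-position argmax (the softmax-style baseline) and Viterbi over per-position scores plus a transition matrix, which is what a CRF layer does at prediction time. The scores and transition weights are invented for illustration, and the label set is reduced to B/I/O for brevity (the model itself distinguishes O1 and O2).

import numpy as np

def viterbi(emissions, transitions):
    # emissions: (T, L) per-position label scores; transitions: (L, L) label-to-label weights.
    T, L = emissions.shape
    score, back = emissions[0].copy(), np.zeros((T, L), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t], score = cand.argmax(axis=0), cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

labels = ["B", "I", "O"]
emissions = np.array([[0.2, 0.1, 0.7],
                      [0.3, 0.9, 0.2],
                      [0.2, 0.8, 0.3],
                      [0.1, 0.2, 0.9]])
transitions = np.array([[0.0, 0.8, 0.0],    # B -> I is rewarded
                        [0.0, 0.5, 0.0],    # I -> I is allowed
                        [0.0, -5.0, 0.0]])  # O -> I is heavily penalised
independent = [labels[i] for i in emissions.argmax(axis=1)]
crf_style = [labels[i] for i in viterbi(emissions, transitions)]
print(independent)  # ['O', 'I', 'I', 'O']  -- an I with no preceding B
print(crf_style)    # ['B', 'I', 'I', 'O']  -- the transition weights repair the sequence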
Effect of q-e.comm and e-e.comm Features As shown in Table TABREF41 , both the q-e.comm and e-e.comm features are effective, and the q-e.comm feature contributes more to the overall performance. The reason is that the interaction between question and evidence is limited and q-e.comm feature with value 1, i.e. the corresponding word also occurs in the question, is a strong indication that the word may not be part of the answer. Effect of Question Representations In this section, we compare the single-time attention method for computing INLINEFORM0 ( INLINEFORM1 , Eq. ( EQREF12 , EQREF13 )) with two widely used options: element-wise max operation INLINEFORM2 : INLINEFORM3 and element-wise average operation INLINEFORM4 : INLINEFORM5 . Intuitively, INLINEFORM6 can distill information in a more flexible way from { INLINEFORM7 }, while INLINEFORM8 tends to hide the differences between them, and INLINEFORM9 lies between INLINEFORM10 and INLINEFORM11 . The results in Table TABREF41 suggest that the more flexible and selective the operation is, the better the performance is. Effect of Evidence LSTMs Structures We investigate the effect of evidence LSTMs layer number, layer width and cross layer links in this section. The results are shown in Figure TABREF46 . For fair comparison, we do not use cross layer links in Figure TABREF46 (a) (dotted lines in Figure FIGREF4 ), and highlight the results with cross layer links (layer width 64) with circle and square for retrieved and annotated evidence settings respectively. We can conclude that: (1) generally the deeper and wider the model is, the better the performance is; (2) cross layer links are effective as they make the third evidence LSTM layer see information in both directions. Word-based v.s. Character-based Input Our model achieves fuzzy matching F1 scores of 69.78% and 70.97% on character-based input in annotated and retrieved evidence settings respectively (Table TABREF46 ), which are only 3.72 and 3.72 points lower than the corresponding scores on word-based input respectively. The performance is promising, demonstrating that our model is robust and effective. Conclusion and Future Work In this work, we build a new human annotated real-world QA dataset WebQA for developing and evaluating QA system on real-world QA data. We also propose a new end-to-end recurrent sequence labeling model for QA. Experimental results show that our model outperforms baselines significantly. There are several future directions we plan to pursue. First, multi-entity factoid and non-factoid QA are also interesting topics. Second, we plan to extend our model to multi-evidence cases. Finally, inspired by Residual Network BIBREF28 , we will investigate deeper and wider models in the future.
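To illustrate the three pooling choices compared in the question-representation experiments above, the short sketch below builds a toy matrix of question-word vectors and reduces it with single-time attention, element-wise max, and element-wise average. The random vectors and the linear scoring function stand in for the trained LSTM outputs and attention parameters, which are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))   # six question-word representations, hidden size 4
w = rng.normal(size=4)        # toy attention parameters

scores = H @ w                                   # one scalar score per word
alpha = np.exp(scores) / np.exp(scores).sum()    # softmax weights
q_att = alpha @ H      # single-time attention: a learned weighted average
q_max = H.max(axis=0)  # element-wise max
q_avg = H.mean(axis=0) # element-wise average
print(q_att, q_max, q_avg, sep="\n")

The paper's finding is that the more flexible and selective the pooling operation, the better the final performance.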
Chinese
42a61773aa494f7b12838f71a949034c12084de1
42a61773aa494f7b12838f71a949034c12084de1_0
Q: What are the baselines? Text: Introduction Question answering (QA) with neural network, i.e. neural QA, is an active research direction along the road towards the long-term AI goal of building general dialogue agents BIBREF0 . Unlike conventional methods, neural QA does not rely on feature engineering and is (at least nearly) end-to-end trainable. It reduces the requirement for domain specific knowledge significantly and makes domain adaption easier. Therefore, it has attracted intensive attention in recent years. Resolving QA problem requires several fundamental abilities including reasoning, memorization, etc. Various neural methods have been proposed to improve such abilities, including neural tensor networks BIBREF1 , recursive networks BIBREF2 , convolution neural networks BIBREF3 , BIBREF4 , BIBREF5 , attention models BIBREF6 , BIBREF5 , BIBREF7 , and memories BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , etc. These methods achieve promising results on various datasets, which demonstrates the high potential of neural QA. However, we believe there are still two major challenges for neural QA: System development and/or evaluation on real-world data: Although several high quality and well-designed QA datasets have been proposed in recent years, there are still problems about using them to develop and/or evaluate QA system under real-world settings due to data size and the way they are created. For example, bAbI BIBREF0 and the 30M Factoid Question-Answer Corpus BIBREF13 are artificially synthesized; the TREC datasets BIBREF14 , Free917 BIBREF15 and WebQuestions BIBREF16 are human generated but only have few thousands of questions; SimpleQuestions BIBREF11 and the CNN and Daily Mail news datasets BIBREF6 are large but generated under controlled conditions. Thus, a new large-scale real-world QA dataset is needed. A new design choice for answer production besides sequence generation and classification/ranking: Without loss of generality, the methods used for producing answers in existing neural QA works can be roughly categorized into the sequence generation type and the classification/ranking type. The former generates answers word by word, e.g. BIBREF0 , BIBREF10 , BIBREF6 . As it generally involves INLINEFORM0 computation over a large vocabulary, the computational cost is remarkably high and it is hard to produce answers with out-of-vocabulary word. The latter produces answers by classification over a predefined set of answers, e.g. BIBREF12 , or ranking given candidates by model score, e.g. BIBREF5 . Although it generally has lower computational cost than the former, it either also has difficulties in handling unseen answers or requires an extra candidate generating component which is hard for end-to-end training. Above all, we need a new design choice for answer production that is both computationally effective and capable of handling unseen words/answers. In this work, we address the above two challenges by a new dataset and a new neural QA model. Our contributions are two-fold: Experimental results show that our model outperforms baselines with a large margin on the WebQA dataset, indicating that it is effective. Furthermore, our model even achieves an F1 score of 70.97% on character-based input, which is comparable with the 74.69% F1 score on word-based input, demonstrating that our model is robust. Factoid QA as Sequence Labeling In this work, we focus on open-domain factoid QA. 
Taking Figure FIGREF3 as an example, we formalize the problem as follows: given each question Q, we have one or more evidences E, and the task is to produce the answer A, where an evidence is a piece of text of any length that contains relevant information to answer the question. The advantage of this formalization is that evidences can be retrieved from web or unstructured knowledge base, which can improve system coverage significantly. Inspired by BIBREF18 , we introduce end-to-end sequence labeling as a new design choice for answer production in neural QA. Given a question and an evidence, we use CRF BIBREF17 to assign a label to each word in the evidence to indicate whether the word is at the beginning (B), inside (I) or outside (O) of the answer (see Figure FIGREF3 for example). The key difference between our work and BIBREF18 is that BIBREF18 needs a lot work on feature engineering which further relies on POS/NER tagging, dependency parsing, question type analysis, etc. While we avoid feature engineering, and only use one single model to solve the problem. Furthermore, compared with sequence generation and classification/ranking methods for answer production, our method avoids expensive INLINEFORM0 computation and can handle unseen answers/words naturally in a principled way. Formally, we formalize QA as a sequence labeling problem as follows: suppose we have a vocabulary INLINEFORM0 of size INLINEFORM1 , given question INLINEFORM2 and evidence INLINEFORM3 , where INLINEFORM4 and INLINEFORM5 are one-hot vectors of dimension INLINEFORM6 , and INLINEFORM7 and INLINEFORM8 are the number of words in the question and evidence respectively. The problem is to find the label sequence INLINEFORM9 which maximizes the conditional probability under parameter INLINEFORM10 DISPLAYFORM0 In this work, we model INLINEFORM0 by a neural network composed of LSTMs and CRF. Overview Figure FIGREF4 shows the structure of our model. The model consists of three components: (1) question LSTM for computing question representation; (2) evidence LSTMs for evidence analysis; and (3) a CRF layer for sequence labeling. The question LSTM in a form of a single layer LSTM equipped with a single time attention takes the question as input and generates the question representation INLINEFORM0 . The three-layer evidence LSTMs takes the evidence, question representation INLINEFORM1 and optional features as input and produces “features” for the CRF layer. The CRF layer takes the “features” as input and produces the label sequence. The details will be given in the following sections. Long Short-Term Memory (LSTM) Following BIBREF19 , we define INLINEFORM0 as a function mapping its input INLINEFORM1 , previous state INLINEFORM2 and output INLINEFORM3 to current state INLINEFORM4 and output INLINEFORM5 : DISPLAYFORM0 where INLINEFORM0 are parameter matrices, INLINEFORM1 are biases, INLINEFORM2 is LSTM layer width, INLINEFORM3 is the INLINEFORM4 function, INLINEFORM5 , INLINEFORM6 and INLINEFORM7 are the input gate, forget gate and output gate respectively. Question LSTM The question LSTM consists of a single-layer LSTM and a single-time attention model. The question INLINEFORM0 is fed into the LSTM to produce a sequence of vector representations INLINEFORM1 DISPLAYFORM0 where INLINEFORM0 is the embedding matrix and INLINEFORM1 is word embedding dimension. Then a weight INLINEFORM2 is computed by the single-time attention model for each INLINEFORM3 DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 . 
And finally the weighted average INLINEFORM2 of INLINEFORM3 is used as the representation of the question DISPLAYFORM0 Evidence LSTMs The three-layer evidence LSTMs processes evidence INLINEFORM0 INLINEFORM1 to produce “features” for the CRF layer. The first LSTM layer takes evidence INLINEFORM0 , question representation INLINEFORM1 and optional features as input. We find the following two simple common word indicator features are effective: Question-Evidence common word feature (q-e.comm): for each word in the evidence, the feature has value 1 when the word also occurs in the question, otherwise 0. The intuition is that words occurring in questions tend not to be part of the answers for factoid questions. Evidence-Evidence common word feature (e-e.comm): for each word in the evidence, the feature has value 1 when the word occurs in another evidence, otherwise 0. The intuition is that words shared by two or more evidences are more likely to be part of the answers. Although counterintuitive, we found non-binary e-e.comm feature values does not work well. Because the more evidences we considered, the more words tend to get non-zero feature values, and the less discriminative the feature is. The second LSTM layer stacks on top of the first LSTM layer, but processes its output in a reverse order. The third LSTM layer stacks upon the first and second LSTM layers with cross layer links, and its output serves as features for CRF layer. Formally, the computations are defined as follows DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are one-hot feature vectors, INLINEFORM2 and INLINEFORM3 are embeddings for the features, and INLINEFORM4 and INLINEFORM5 are the feature embedding dimensions. Note that we use the same word embedding matrix INLINEFORM6 as in question LSTM. Sequence Labeling Following BIBREF20 , BIBREF21 , we use CRF on top of evidence LSTMs for sequence labeling. The probability of a label sequence INLINEFORM0 given question INLINEFORM1 and evidence INLINEFORM2 is computed as DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 is the number of label types, INLINEFORM3 is the transition weight from label INLINEFORM4 to INLINEFORM5 , and INLINEFORM6 is the INLINEFORM7 -th value of vector INLINEFORM8 . Training The objective function of our model is INLINEFORM0 where INLINEFORM0 is the golden label sequence, and INLINEFORM1 is training set. We use a minibatch stochastic gradient descent (SGD) BIBREF22 algorithm with rmsprop BIBREF23 to minimize the objective function. The initial learning rate is 0.001, batch size is 120, and INLINEFORM0 . We also apply dropout BIBREF24 to the output of all the LSTM layers. The dropout rate is 0.05. All these hyper-parameters are determined empirically via grid search on validation set. WebQA Dataset In order to train and evaluate open-domain factoid QA system for real-world questions, we build a new Chinese QA dataset named as WebQA. The dataset consists of tuples of (question, evidences, answer), which is similar to example in Figure FIGREF3 . All the questions, evidences and answers are collected from web. Table TABREF20 shows some statistics of the dataset. The questions and answers are mainly collected from a large community QA website Baidu Zhidao and a small portion are from hand collected web documents. Therefore, all these questions are indeed asked by real-world users in daily life instead of under controlled conditions. 
All the questions are of single-entity factoid type, which means (1) each question is a factoid question and (2) its answer involves only one entity (but may have multiple words). The question in Figure FIGREF3 is a positive example, while the question “Who are the children of Albert Enistein?” is a counter example because the answer involves three persons. The type and correctness of all the question answer pairs are verified by at least two annotators. All the evidences are retrieved from Internet by using a search engine with questions as queries. We download web pages returned in the first 3 result pages and take all the text pieces which have no more than 5 sentences and include at least one question word as candidate evidences. As evidence retrieval is beyond the scope of this work, we simply use TF-IDF values to re-rank these candidates. For each question in the training set, we provide the top 10 ranked evidences to annotate (“Annotated Evidence” in Table TABREF20 ). An evidence is annotated as positive if the question can be answered by just reading the evidence without any other prior knowledge, otherwise negative. Only evidences whose annotations are agreed by at least two annotators are retained. We also provide trivial negative evidences (“Retrieved Evidence” in Table TABREF20 ), i.e. evidences that do not contain golden standard answers. For each question in the validation and test sets, we provide one major positive evidence, and maybe an additional positive one to compute features. Both of them are annotated. Raw retrieved evidences are also provided for evaluation purpose (“Retrieved Evidence” in Table TABREF20 ). The dataset will be released on the project page http://idl.baidu.com/WebQA.html. Baselines We compare our model with two sets of baselines: MemN2N BIBREF12 is an end-to-end trainable version of memory networks BIBREF9 . It encodes question and evidence with a bag-of-word method and stores the representations of evidences in an external memory. A recurrent attention model is used to retrieve relevant information from the memory to answer the question. Attentive and Impatient Readers BIBREF6 use bidirectional LSTMs to encode question and evidence, and do classification over a large vocabulary based on these two encodings. The simpler Attentive Reader uses a similar way as our work to compute attention for the evidence. And the more complex Impatient Reader computes attention after processing each question word. The key difference between our model and the two readers is that they produce answer by doing classification over a large vocabulary, which is computationally expensive and has difficulties in handling unseen words. However, as our model uses an end-to-end trainable sequence labeling technique, it avoids both of the two problems by its nature. Evaluation Method The performance is measured with precision (P), recall (R) and F1-measure (F1) DISPLAYFORM0 where INLINEFORM0 is the list of correctly answered questions, INLINEFORM1 is the list of produced answers, and INLINEFORM2 is the list of all questions . As WebQA is collected from web, the same answer may be expressed in different surface forms in the golden standard answer and the evidence, e.g. “北京 (Beijing)” v.s. “北京市 (Beijing province)”. 
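The precision/recall/F1 equations above survive only as placeholders in this copy; the sketch below follows the stated definitions (correctly answered questions divided by produced answers, and by all questions, respectively). The handling of the degenerate zero cases is an added assumption.

```python
# Sketch of the precision/recall/F1 evaluation described above, following the stated reading:
# P = |correct| / |produced answers|, R = |correct| / |all questions|, F1 = harmonic mean.

def evaluate(correct, produced, all_questions):
    precision = len(correct) / len(produced) if produced else 0.0
    recall = len(correct) / len(all_questions) if all_questions else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) > 0 else 0.0
    return precision, recall, f1

# Toy numbers: 100 questions, the system produces answers for 80 of them, 60 are correct.
p, r, f1 = evaluate(correct=range(60), produced=range(80), all_questions=range(100))
print(f"P={p:.3f} R={r:.3f} F1={f1:.3f}")   # P=0.750 R=0.600 F1=0.667
```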
Therefore, we use two ways to count correctly answered questions, which are referred to as “strict” and “fuzzy” in the tables: Strict matching: A question is counted if and only if the produced answer is identical to the golden standard answer; Fuzzy matching: A question is counted if and only if the produced answer is a synonym of the golden standard answer; And we also consider two evaluation settings: Annotated evidence: Each question has one major annotated evidence and maybe another annotated evidence for computing q-e.comm and e-e.comm features (Section SECREF14 ); Retrieved evidence: Each question is provided with at most 20 automatically retrieved evidences (see Section SECREF5 for details). All the evidences will be processed by our model independently and answers are voted by frequency to decide the final result. Note that a large amount of the evidences are negative and our model should not produce any answer for them. Model Settings If not specified, the following hyper-parameters will be used in the reset of this section: LSTM layer width INLINEFORM0 (Section SECREF7 ), word embedding dimension INLINEFORM1 (Section SECREF9 ), feature embedding dimension INLINEFORM2 (Section SECREF9 ). The word embeddings are initialized with pre-trained embeddings using a 5-gram neural language model BIBREF25 and is fixed during training. We will show that injecting noise data is important for improving performance on retrieved evidence setting in Section SECREF37 . In the following experiments, 20% of the training evidences will be negative ones randomly selected on the fly, of which 25% are annotated negative evidences and 75% are retrieved trivial negative evidences (Section SECREF5 ). The percentages are determined empirically. Intuitively, we provide the noise data to teach the model learning to recognize unreliable evidence. For each evidence, we will randomly sample another evidence from the rest evidences of the question and compare them to compute the e-e.comm feature (Section SECREF14 ). We will develop more powerful models to process multiple evidences in a more principle way in the future. As the answer for each question in our WebQA dataset only involves one entity (Section SECREF5 ), we distinguish label Os before and after the first B in the label sequence explicitly to discourage our model to produce multiple answers for a question. For example, the golden labels for the example evidence in Figure FIGREF3 will became “Einstein/O1 married/O1 his/O1 first/O1 wife/O1 Mileva/B Marić/I in/O2 1903/O2”, where we use “O1” and “O2” to denote label Os before and after the first B . “Fuzzy matching” is also used for computing golden standard labels for training set. For each setting, we will run three trials with different random seeds and report the average performance in the following sections. Comparison with Baselines As the baselines can only predict one-word answers, we only do experiments on the one-word answer subset of WebQA, i.e. only questions with one-word answers are retained for training, validation and test. As shown in Table TABREF23 , our model achieves significant higher F1 scores than all the baselines. The main reason for the relative low performance of MemN2N is that it uses a bag-of-word method to encode question and evidence such that higher order information like word order is absent to the model. We think its performance can be improved by designing more complex encoding methods BIBREF26 and leave it as a future work. 
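A minimal sketch of the frequency voting used in the retrieved-evidence setting described above: each evidence is labeled independently, evidences on which the model abstains are ignored, and the most frequent remaining answer wins. The tie-breaking behavior is an assumption, not something specified in the text.

```python
# Sketch of the answer voting step in the retrieved-evidence setting: answers produced per
# evidence are counted and the most frequent one is returned; empty predictions are skipped.
from collections import Counter

def vote_answers(per_evidence_answers):
    """per_evidence_answers: list of answer strings, one per evidence ('' if no answer)."""
    votes = Counter(a for a in per_evidence_answers if a)
    if not votes:
        return ""              # the model abstains on every evidence
    return votes.most_common(1)[0][0]

predictions = ["Mileva Marić", "", "Mileva Marić", "1903", "", "Mileva Marić"]
print(vote_answers(predictions))   # -> "Mileva Marić"
```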
The Attentive and Impatient Readers only have access to fixed-length representations when doing classification. In contrast, our model has access to the outputs of all the time steps of the evidence LSTMs and scores the label sequence as a whole. Therefore, our model achieves better performance. Evaluation on the Entire WebQA Dataset In this section, we evaluate our model on the entire WebQA dataset. The evaluation results are shown in Table TABREF24 . Although producing multi-word answers is harder, our model achieves results comparable with those on the one-word answer subset (Table TABREF23 ), demonstrating that it is effective in both single-word and multi-word answer settings. “Softmax” in Table TABREF24 means we replace CRF with INLINEFORM0 , i.e. replace Eq. ( EQREF19 ) with DISPLAYFORM0 CRF outperforms INLINEFORM0 significantly in all cases. The reason is that INLINEFORM1 predicts each label independently, which suggests that modeling label transitions explicitly is essential for improving performance. A natural choice for modeling label transitions in INLINEFORM2 is to take the last prediction into account, as in BIBREF27 . The result is shown in Table TABREF24 as “Softmax( INLINEFORM3 -1)”. However, its performance is only comparable with “Softmax” and significantly lower than CRF. The reason is that for CRF we can enumerate all possible label sequences implicitly by dynamic programming during prediction, but this is not possible for “Softmax( INLINEFORM4 -1)”, which indicates that CRF is the better choice. “Noise” in Table TABREF24 indicates whether we inject noise data or not (Section SECREF34 ). As all evidences are positive under the annotated evidence setting, the ability to recognize unreliable evidence is of no use there, so the performance of our model with and without noise is comparable under that setting. However, this ability is important for improving performance under the retrieved evidence setting, because a large proportion of the retrieved evidences are negative. As a result, we observe a significant improvement from injecting noise data in this setting. Effect of Word Embedding As stated in Section SECREF34 , the word embedding INLINEFORM0 is initialized with the LM embedding and kept fixed during training. We evaluate different initialization and optimization methods in this section. The evaluation results are shown in Table TABREF40 . The second row shows the results when the embedding is optimized jointly during training. The performance drops significantly. Detailed analysis reveals that a trainable embedding enlarges the number of trainable parameters and the model overfits easily. The model acts like a context-independent entity tagger to some extent, which is not desired. For example, the model will try to find any location name in the evidence when the word “在哪 (where)” occurs in the question. In contrast, the pre-trained fixed embedding forces the model to pay more attention to latent syntactic regularities. It also carries basic priors such as “梨 (pear)” being a fruit and “李世石 (Lee Sedol)” being a person, so the model generalizes better to test data with the fixed embedding. The third row shows the result when the embedding is randomly initialized and jointly optimized. The performance drops significantly further, suggesting that the pre-trained embedding indeed carries meaningful priors. 
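To illustrate why CRF decoding can differ from the per-position “Softmax” variant discussed above, here is a toy contrast between independent argmax and Viterbi dynamic programming over a transition matrix. The scores are made up; the point is only that transitions let the decoder avoid invalid sequences such as an I with no preceding B.

```python
# Toy contrast between per-position argmax (each label picked independently, as in the
# "Softmax" variant) and Viterbi decoding over a transition matrix (what CRF decoding does
# implicitly). Emission and transition scores below are toy values, not trained parameters.
import numpy as np

def independent_argmax(emissions):
    # emissions: (T, L) per-position label scores
    return list(np.argmax(emissions, axis=1))

def viterbi(emissions, transitions):
    # transitions[i, j]: score of moving from label i to label j
    T, L = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions            # (L, L): best previous for each current
        backptr[t] = np.argmax(cand, axis=0)
        score = cand[backptr[t], np.arange(L)] + emissions[t]
    best = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]

labels = ["B", "I", "O"]
emissions = np.array([[0.2, 1.0, 0.5],      # position 0 locally prefers "I"
                      [1.0, 0.3, 0.4],
                      [0.2, 0.9, 1.0]])
transitions = np.array([[-2.0,  2.0, 0.0],  # B -> I is encouraged
                        [-2.0,  1.0, 1.0],
                        [ 1.0, -5.0, 0.5]]) # O -> I is heavily penalized
print([labels[i] for i in independent_argmax(emissions)])   # ['I', 'B', 'O']: an I with no preceding B
print([labels[i] for i in viterbi(emissions, transitions)]) # ['O', 'B', 'I']: a well-formed span
```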
Effect of q-e.comm and e-e.comm Features As shown in Table TABREF41 , both the q-e.comm and e-e.comm features are effective, and the q-e.comm feature contributes more to the overall performance. The reason is that the interaction between question and evidence is limited and q-e.comm feature with value 1, i.e. the corresponding word also occurs in the question, is a strong indication that the word may not be part of the answer. Effect of Question Representations In this section, we compare the single-time attention method for computing INLINEFORM0 ( INLINEFORM1 , Eq. ( EQREF12 , EQREF13 )) with two widely used options: element-wise max operation INLINEFORM2 : INLINEFORM3 and element-wise average operation INLINEFORM4 : INLINEFORM5 . Intuitively, INLINEFORM6 can distill information in a more flexible way from { INLINEFORM7 }, while INLINEFORM8 tends to hide the differences between them, and INLINEFORM9 lies between INLINEFORM10 and INLINEFORM11 . The results in Table TABREF41 suggest that the more flexible and selective the operation is, the better the performance is. Effect of Evidence LSTMs Structures We investigate the effect of evidence LSTMs layer number, layer width and cross layer links in this section. The results are shown in Figure TABREF46 . For fair comparison, we do not use cross layer links in Figure TABREF46 (a) (dotted lines in Figure FIGREF4 ), and highlight the results with cross layer links (layer width 64) with circle and square for retrieved and annotated evidence settings respectively. We can conclude that: (1) generally the deeper and wider the model is, the better the performance is; (2) cross layer links are effective as they make the third evidence LSTM layer see information in both directions. Word-based v.s. Character-based Input Our model achieves fuzzy matching F1 scores of 69.78% and 70.97% on character-based input in annotated and retrieved evidence settings respectively (Table TABREF46 ), which are only 3.72 and 3.72 points lower than the corresponding scores on word-based input respectively. The performance is promising, demonstrating that our model is robust and effective. Conclusion and Future Work In this work, we build a new human annotated real-world QA dataset WebQA for developing and evaluating QA system on real-world QA data. We also propose a new end-to-end recurrent sequence labeling model for QA. Experimental results show that our model outperforms baselines significantly. There are several future directions we plan to pursue. First, multi-entity factoid and non-factoid QA are also interesting topics. Second, we plan to extend our model to multi-evidence cases. Finally, inspired by Residual Network BIBREF28 , we will investigate deeper and wider models in the future.
MemN2N BIBREF12, Attentive and Impatient Readers BIBREF6
48c3e61b2ed7b3f97706e2a522172bf9b51ec467
48c3e61b2ed7b3f97706e2a522172bf9b51ec467_0
Q: What was the inter-annotator agreement? Text: Introduction Question answering (QA) with neural network, i.e. neural QA, is an active research direction along the road towards the long-term AI goal of building general dialogue agents BIBREF0 . Unlike conventional methods, neural QA does not rely on feature engineering and is (at least nearly) end-to-end trainable. It reduces the requirement for domain specific knowledge significantly and makes domain adaption easier. Therefore, it has attracted intensive attention in recent years. Resolving QA problem requires several fundamental abilities including reasoning, memorization, etc. Various neural methods have been proposed to improve such abilities, including neural tensor networks BIBREF1 , recursive networks BIBREF2 , convolution neural networks BIBREF3 , BIBREF4 , BIBREF5 , attention models BIBREF6 , BIBREF5 , BIBREF7 , and memories BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , etc. These methods achieve promising results on various datasets, which demonstrates the high potential of neural QA. However, we believe there are still two major challenges for neural QA: System development and/or evaluation on real-world data: Although several high quality and well-designed QA datasets have been proposed in recent years, there are still problems about using them to develop and/or evaluate QA system under real-world settings due to data size and the way they are created. For example, bAbI BIBREF0 and the 30M Factoid Question-Answer Corpus BIBREF13 are artificially synthesized; the TREC datasets BIBREF14 , Free917 BIBREF15 and WebQuestions BIBREF16 are human generated but only have few thousands of questions; SimpleQuestions BIBREF11 and the CNN and Daily Mail news datasets BIBREF6 are large but generated under controlled conditions. Thus, a new large-scale real-world QA dataset is needed. A new design choice for answer production besides sequence generation and classification/ranking: Without loss of generality, the methods used for producing answers in existing neural QA works can be roughly categorized into the sequence generation type and the classification/ranking type. The former generates answers word by word, e.g. BIBREF0 , BIBREF10 , BIBREF6 . As it generally involves INLINEFORM0 computation over a large vocabulary, the computational cost is remarkably high and it is hard to produce answers with out-of-vocabulary word. The latter produces answers by classification over a predefined set of answers, e.g. BIBREF12 , or ranking given candidates by model score, e.g. BIBREF5 . Although it generally has lower computational cost than the former, it either also has difficulties in handling unseen answers or requires an extra candidate generating component which is hard for end-to-end training. Above all, we need a new design choice for answer production that is both computationally effective and capable of handling unseen words/answers. In this work, we address the above two challenges by a new dataset and a new neural QA model. Our contributions are two-fold: Experimental results show that our model outperforms baselines with a large margin on the WebQA dataset, indicating that it is effective. Furthermore, our model even achieves an F1 score of 70.97% on character-based input, which is comparable with the 74.69% F1 score on word-based input, demonstrating that our model is robust. Factoid QA as Sequence Labeling In this work, we focus on open-domain factoid QA. 
Taking Figure FIGREF3 as an example, we formalize the problem as follows: given each question Q, we have one or more evidences E, and the task is to produce the answer A, where an evidence is a piece of text of any length that contains relevant information to answer the question. The advantage of this formalization is that evidences can be retrieved from web or unstructured knowledge base, which can improve system coverage significantly. Inspired by BIBREF18 , we introduce end-to-end sequence labeling as a new design choice for answer production in neural QA. Given a question and an evidence, we use CRF BIBREF17 to assign a label to each word in the evidence to indicate whether the word is at the beginning (B), inside (I) or outside (O) of the answer (see Figure FIGREF3 for example). The key difference between our work and BIBREF18 is that BIBREF18 needs a lot work on feature engineering which further relies on POS/NER tagging, dependency parsing, question type analysis, etc. While we avoid feature engineering, and only use one single model to solve the problem. Furthermore, compared with sequence generation and classification/ranking methods for answer production, our method avoids expensive INLINEFORM0 computation and can handle unseen answers/words naturally in a principled way. Formally, we formalize QA as a sequence labeling problem as follows: suppose we have a vocabulary INLINEFORM0 of size INLINEFORM1 , given question INLINEFORM2 and evidence INLINEFORM3 , where INLINEFORM4 and INLINEFORM5 are one-hot vectors of dimension INLINEFORM6 , and INLINEFORM7 and INLINEFORM8 are the number of words in the question and evidence respectively. The problem is to find the label sequence INLINEFORM9 which maximizes the conditional probability under parameter INLINEFORM10 DISPLAYFORM0 In this work, we model INLINEFORM0 by a neural network composed of LSTMs and CRF. Overview Figure FIGREF4 shows the structure of our model. The model consists of three components: (1) question LSTM for computing question representation; (2) evidence LSTMs for evidence analysis; and (3) a CRF layer for sequence labeling. The question LSTM in a form of a single layer LSTM equipped with a single time attention takes the question as input and generates the question representation INLINEFORM0 . The three-layer evidence LSTMs takes the evidence, question representation INLINEFORM1 and optional features as input and produces “features” for the CRF layer. The CRF layer takes the “features” as input and produces the label sequence. The details will be given in the following sections. Long Short-Term Memory (LSTM) Following BIBREF19 , we define INLINEFORM0 as a function mapping its input INLINEFORM1 , previous state INLINEFORM2 and output INLINEFORM3 to current state INLINEFORM4 and output INLINEFORM5 : DISPLAYFORM0 where INLINEFORM0 are parameter matrices, INLINEFORM1 are biases, INLINEFORM2 is LSTM layer width, INLINEFORM3 is the INLINEFORM4 function, INLINEFORM5 , INLINEFORM6 and INLINEFORM7 are the input gate, forget gate and output gate respectively. Question LSTM The question LSTM consists of a single-layer LSTM and a single-time attention model. The question INLINEFORM0 is fed into the LSTM to produce a sequence of vector representations INLINEFORM1 DISPLAYFORM0 where INLINEFORM0 is the embedding matrix and INLINEFORM1 is word embedding dimension. Then a weight INLINEFORM2 is computed by the single-time attention model for each INLINEFORM3 DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 . 
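The LSTM gate equations referenced above are reduced to placeholders in this copy; the sketch below is a standard single LSTM step with input, forget and output gates, consistent with the description but not necessarily the paper's exact parameterization (bias handling, gate ordering, etc. are assumptions).

```python
# Standard single LSTM step with input, forget and output gates; a generic sketch, since
# the displayed equations in this copy are placeholders. Shapes and seeding are toy choices.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,). Returns (h, c)."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])             # input gate
    f = sigmoid(z[H:2 * H])         # forget gate
    o = sigmoid(z[2 * H:3 * H])     # output gate
    g = np.tanh(z[3 * H:4 * H])     # candidate cell state
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

D, H = 5, 4                          # toy embedding dimension and layer width
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, H)), np.zeros(4 * H)
h = c = np.zeros(H)
for x in rng.normal(size=(3, D)):    # a toy 3-token sequence of embeddings
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```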
And finally the weighted average INLINEFORM2 of INLINEFORM3 is used as the representation of the question DISPLAYFORM0 Evidence LSTMs The three-layer evidence LSTMs processes evidence INLINEFORM0 INLINEFORM1 to produce “features” for the CRF layer. The first LSTM layer takes evidence INLINEFORM0 , question representation INLINEFORM1 and optional features as input. We find the following two simple common word indicator features are effective: Question-Evidence common word feature (q-e.comm): for each word in the evidence, the feature has value 1 when the word also occurs in the question, otherwise 0. The intuition is that words occurring in questions tend not to be part of the answers for factoid questions. Evidence-Evidence common word feature (e-e.comm): for each word in the evidence, the feature has value 1 when the word occurs in another evidence, otherwise 0. The intuition is that words shared by two or more evidences are more likely to be part of the answers. Although counterintuitive, we found non-binary e-e.comm feature values does not work well. Because the more evidences we considered, the more words tend to get non-zero feature values, and the less discriminative the feature is. The second LSTM layer stacks on top of the first LSTM layer, but processes its output in a reverse order. The third LSTM layer stacks upon the first and second LSTM layers with cross layer links, and its output serves as features for CRF layer. Formally, the computations are defined as follows DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are one-hot feature vectors, INLINEFORM2 and INLINEFORM3 are embeddings for the features, and INLINEFORM4 and INLINEFORM5 are the feature embedding dimensions. Note that we use the same word embedding matrix INLINEFORM6 as in question LSTM. Sequence Labeling Following BIBREF20 , BIBREF21 , we use CRF on top of evidence LSTMs for sequence labeling. The probability of a label sequence INLINEFORM0 given question INLINEFORM1 and evidence INLINEFORM2 is computed as DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 is the number of label types, INLINEFORM3 is the transition weight from label INLINEFORM4 to INLINEFORM5 , and INLINEFORM6 is the INLINEFORM7 -th value of vector INLINEFORM8 . Training The objective function of our model is INLINEFORM0 where INLINEFORM0 is the golden label sequence, and INLINEFORM1 is training set. We use a minibatch stochastic gradient descent (SGD) BIBREF22 algorithm with rmsprop BIBREF23 to minimize the objective function. The initial learning rate is 0.001, batch size is 120, and INLINEFORM0 . We also apply dropout BIBREF24 to the output of all the LSTM layers. The dropout rate is 0.05. All these hyper-parameters are determined empirically via grid search on validation set. WebQA Dataset In order to train and evaluate open-domain factoid QA system for real-world questions, we build a new Chinese QA dataset named as WebQA. The dataset consists of tuples of (question, evidences, answer), which is similar to example in Figure FIGREF3 . All the questions, evidences and answers are collected from web. Table TABREF20 shows some statistics of the dataset. The questions and answers are mainly collected from a large community QA website Baidu Zhidao and a small portion are from hand collected web documents. Therefore, all these questions are indeed asked by real-world users in daily life instead of under controlled conditions. 
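Returning to the single-time attention pooling described at the start of this passage (a weight per question word, then a weighted average as the question representation), here is one plausible instantiation. The tanh scoring form, the softmax normalization and the parameter shapes are assumptions, since the displayed equations are placeholders in this copy.

```python
# One plausible sketch of single-time attention pooling over question LSTM outputs:
# a scalar score per question word, normalized into weights, then a weighted average.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def question_representation(q_outputs, w, v):
    """q_outputs: (n, H) LSTM outputs for the question; w: (H, H); v: (H,). Assumed shapes."""
    scores = np.tanh(q_outputs @ w) @ v        # one scalar score per question word
    alpha = softmax(scores)                    # attention weights, sum to 1
    return alpha @ q_outputs                   # weighted average -> (H,)

H, n = 4, 6                                    # toy layer width / question length
rng = np.random.default_rng(1)
q_outputs = rng.normal(size=(n, H))
r_q = question_representation(q_outputs, rng.normal(size=(H, H)), rng.normal(size=(H,)))
print(r_q.shape)                               # (4,)
```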
All the questions are of single-entity factoid type, which means (1) each question is a factoid question and (2) its answer involves only one entity (but may have multiple words). The question in Figure FIGREF3 is a positive example, while the question “Who are the children of Albert Enistein?” is a counter example because the answer involves three persons. The type and correctness of all the question answer pairs are verified by at least two annotators. All the evidences are retrieved from Internet by using a search engine with questions as queries. We download web pages returned in the first 3 result pages and take all the text pieces which have no more than 5 sentences and include at least one question word as candidate evidences. As evidence retrieval is beyond the scope of this work, we simply use TF-IDF values to re-rank these candidates. For each question in the training set, we provide the top 10 ranked evidences to annotate (“Annotated Evidence” in Table TABREF20 ). An evidence is annotated as positive if the question can be answered by just reading the evidence without any other prior knowledge, otherwise negative. Only evidences whose annotations are agreed by at least two annotators are retained. We also provide trivial negative evidences (“Retrieved Evidence” in Table TABREF20 ), i.e. evidences that do not contain golden standard answers. For each question in the validation and test sets, we provide one major positive evidence, and maybe an additional positive one to compute features. Both of them are annotated. Raw retrieved evidences are also provided for evaluation purpose (“Retrieved Evidence” in Table TABREF20 ). The dataset will be released on the project page http://idl.baidu.com/WebQA.html. Baselines We compare our model with two sets of baselines: MemN2N BIBREF12 is an end-to-end trainable version of memory networks BIBREF9 . It encodes question and evidence with a bag-of-word method and stores the representations of evidences in an external memory. A recurrent attention model is used to retrieve relevant information from the memory to answer the question. Attentive and Impatient Readers BIBREF6 use bidirectional LSTMs to encode question and evidence, and do classification over a large vocabulary based on these two encodings. The simpler Attentive Reader uses a similar way as our work to compute attention for the evidence. And the more complex Impatient Reader computes attention after processing each question word. The key difference between our model and the two readers is that they produce answer by doing classification over a large vocabulary, which is computationally expensive and has difficulties in handling unseen words. However, as our model uses an end-to-end trainable sequence labeling technique, it avoids both of the two problems by its nature. Evaluation Method The performance is measured with precision (P), recall (R) and F1-measure (F1) DISPLAYFORM0 where INLINEFORM0 is the list of correctly answered questions, INLINEFORM1 is the list of produced answers, and INLINEFORM2 is the list of all questions . As WebQA is collected from web, the same answer may be expressed in different surface forms in the golden standard answer and the evidence, e.g. “北京 (Beijing)” v.s. “北京市 (Beijing province)”. 
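The TF-IDF re-ranking of candidate evidences mentioned in the dataset construction above can be sketched as follows. The exact weighting scheme and tokenization used by the dataset builders are not specified, so this is only one plausible instantiation (score = sum over question terms of tf times a smoothed idf).

```python
# One plausible TF-IDF re-ranking of candidate evidences against a question; the weighting
# scheme, smoothing and tokenization are assumptions made for illustration.
import math
from collections import Counter

def rank_candidates(question, candidates):
    q_terms = question.lower().split()
    docs = [c.lower().split() for c in candidates]
    n_docs = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log((1 + n_docs) / (1 + df[t])) + 1 for t in df}

    def score(doc):
        tf = Counter(doc)
        return sum(tf[t] * idf.get(t, 0.0) for t in q_terms)

    order = sorted(range(n_docs), key=lambda i: score(docs[i]), reverse=True)
    return [candidates[i] for i in order]

question = "Who is Albert Einstein 's first wife"
candidates = [
    "Einstein married his first wife Mileva Marić in 1903",
    "Albert Einstein was born in Ulm in 1879",
    "The first wife of Albert Einstein was Mileva Marić",
]
for c in rank_candidates(question, candidates):
    print(c)
```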
Therefore, we use two ways to count correctly answered questions, which are referred to as “strict” and “fuzzy” in the tables: Strict matching: A question is counted if and only if the produced answer is identical to the golden standard answer; Fuzzy matching: A question is counted if and only if the produced answer is a synonym of the golden standard answer; And we also consider two evaluation settings: Annotated evidence: Each question has one major annotated evidence and maybe another annotated evidence for computing q-e.comm and e-e.comm features (Section SECREF14 ); Retrieved evidence: Each question is provided with at most 20 automatically retrieved evidences (see Section SECREF5 for details). All the evidences will be processed by our model independently and answers are voted by frequency to decide the final result. Note that a large amount of the evidences are negative and our model should not produce any answer for them. Model Settings If not specified, the following hyper-parameters will be used in the reset of this section: LSTM layer width INLINEFORM0 (Section SECREF7 ), word embedding dimension INLINEFORM1 (Section SECREF9 ), feature embedding dimension INLINEFORM2 (Section SECREF9 ). The word embeddings are initialized with pre-trained embeddings using a 5-gram neural language model BIBREF25 and is fixed during training. We will show that injecting noise data is important for improving performance on retrieved evidence setting in Section SECREF37 . In the following experiments, 20% of the training evidences will be negative ones randomly selected on the fly, of which 25% are annotated negative evidences and 75% are retrieved trivial negative evidences (Section SECREF5 ). The percentages are determined empirically. Intuitively, we provide the noise data to teach the model learning to recognize unreliable evidence. For each evidence, we will randomly sample another evidence from the rest evidences of the question and compare them to compute the e-e.comm feature (Section SECREF14 ). We will develop more powerful models to process multiple evidences in a more principle way in the future. As the answer for each question in our WebQA dataset only involves one entity (Section SECREF5 ), we distinguish label Os before and after the first B in the label sequence explicitly to discourage our model to produce multiple answers for a question. For example, the golden labels for the example evidence in Figure FIGREF3 will became “Einstein/O1 married/O1 his/O1 first/O1 wife/O1 Mileva/B Marić/I in/O2 1903/O2”, where we use “O1” and “O2” to denote label Os before and after the first B . “Fuzzy matching” is also used for computing golden standard labels for training set. For each setting, we will run three trials with different random seeds and report the average performance in the following sections. Comparison with Baselines As the baselines can only predict one-word answers, we only do experiments on the one-word answer subset of WebQA, i.e. only questions with one-word answers are retained for training, validation and test. As shown in Table TABREF23 , our model achieves significant higher F1 scores than all the baselines. The main reason for the relative low performance of MemN2N is that it uses a bag-of-word method to encode question and evidence such that higher order information like word order is absent to the model. We think its performance can be improved by designing more complex encoding methods BIBREF26 and leave it as a future work. 
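A small sketch of the label refinement described above, where O labels before and after the first B are distinguished as O1 and O2 to discourage multiple answer spans. It reproduces the Einstein example given in the text; the helper name is illustrative.

```python
# Sketch of splitting plain O labels into O1 (before the first B) and O2 (after it),
# reproducing the Einstein example from the text.

def split_o_labels(labels):
    refined, seen_b = [], False
    for label in labels:
        if label == "B":
            seen_b = True
            refined.append("B")
        elif label == "I":
            refined.append("I")
        else:
            refined.append("O2" if seen_b else "O1")
    return refined

tokens = "Einstein married his first wife Mileva Marić in 1903".split()
labels = ["O", "O", "O", "O", "O", "B", "I", "O", "O"]
print(list(zip(tokens, split_o_labels(labels))))
# -> Einstein/O1 ... wife/O1 Mileva/B Marić/I in/O2 1903/O2
```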
The Attentive and Impatient Readers only have access to the fixed length representations when doing classification. However, our model has access to the outputs of all the time steps of the evidence LSTMs, and scores the label sequence as a whole. Therefore, our model achieves better performance. Evaluation on the Entire WebQA Dataset In this section, we evaluate our model on the entire WebQA dataset. The evaluation results are shown in Table TABREF24 . Although producing multi-word answers is harder, our model achieves comparable results with the one-word answer subset (Table TABREF23 ), demonstrating that our model is effective for both single-word and multi-word word settings. “Softmax” in Table TABREF24 means we replace CRF with INLINEFORM0 , i.e. replace Eq. ( EQREF19 ) with DISPLAYFORM0 CRF outperforms INLINEFORM0 significantly in all cases. The reason is that INLINEFORM1 predicts each label independently, suggesting that modeling label transition explicitly is essential for improving performance. A natural choice for modeling label transition in INLINEFORM2 is to take the last prediction into account as in BIBREF27 . The result is shown in Table TABREF24 as “Softmax( INLINEFORM3 -1)”. However, its performance is only comparable with “Softmax” and significantly lower than CRF. The reason is that we can enumerate all possible label sequences implicitly by dynamic programming for CRF during predicting but this is not possible for “Softmax( INLINEFORM4 -1)” , which indicates CRF is a better choice. “Noise” in Table TABREF24 means whether we inject noise data or not (Section SECREF34 ). As all evidences are positive under the annotated evidence setting, the ability for recognizing unreliable evidence will be useless. Therefore, the performance of our model with and without noise is comparable under the annotated evidence setting. However, the ability is important to improve the performance under the retrieved evidence setting because a large amount of the retrieved evidences are negative ones. As a result, we observe significant improvement by injecting noise data for this setting. Effect of Word Embedding As stated in Section SECREF34 , the word embedding INLINEFORM0 is initialized with LM embedding and kept fixed in training. We evaluate different initialization and optimization methods in this section. The evaluation results are shown in Table TABREF40 . The second row shows the results when the embedding is optimized jointly during training. The performance drops significantly. Detailed analysis reveals that the trainable embedding enlarge trainable parameter number and the model gets over fitting easily. The model acts like a context independent entity tagger to some extend, which is not desired. For example, the model will try to find any location name in the evidence when the word “在哪 (where)” occurs in the question. In contrary, pre-trained fixed embedding forces the model to pay more attention to the latent syntactic regularities. And it also carries basic priors such as “梨 (pear)” is fruit and “李世石 (Lee Sedol)” is a person, thus the model will generalize better to test data with fixed embedding. The third row shows the result when the embedding is randomly initialized and jointly optimized. The performance drops significantly further, suggesting that pre-trained embedding indeed carries meaningful priors. 
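The noise-injection results discussed above rely on the sampling scheme from the model settings (about 20% negative evidences, split 25%/75% between annotated and retrieved negatives). One simple way to realize those proportions on the fly is sketched below; it is not the authors' code, and the fallback behavior for empty pools is an assumption.

```python
# One simple realization of the on-the-fly negative-evidence sampling described in the
# model settings: roughly 20% negatives, of which 25% annotated and 75% retrieved.
import random

def sample_training_evidence(positives, annotated_negatives, retrieved_negatives,
                             p_neg=0.20, p_annotated=0.25, rng=random.Random(0)):
    if rng.random() < p_neg and (annotated_negatives or retrieved_negatives):
        pool = annotated_negatives if rng.random() < p_annotated else retrieved_negatives
        if pool:
            return rng.choice(pool), "negative"
    return rng.choice(positives), "positive"    # fall back to a positive evidence

positives = ["pos_evidence_1", "pos_evidence_2"]
annotated_negatives = ["ann_neg_1"]
retrieved_negatives = ["ret_neg_1", "ret_neg_2", "ret_neg_3"]
counts = {"positive": 0, "negative": 0}
for _ in range(1000):
    _, kind = sample_training_evidence(positives, annotated_negatives, retrieved_negatives)
    counts[kind] += 1
print(counts)   # roughly 80% positive / 20% negative
```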
Effect of q-e.comm and e-e.comm Features As shown in Table TABREF41 , both the q-e.comm and e-e.comm features are effective, and the q-e.comm feature contributes more to the overall performance. The reason is that the interaction between question and evidence is limited and q-e.comm feature with value 1, i.e. the corresponding word also occurs in the question, is a strong indication that the word may not be part of the answer. Effect of Question Representations In this section, we compare the single-time attention method for computing INLINEFORM0 ( INLINEFORM1 , Eq. ( EQREF12 , EQREF13 )) with two widely used options: element-wise max operation INLINEFORM2 : INLINEFORM3 and element-wise average operation INLINEFORM4 : INLINEFORM5 . Intuitively, INLINEFORM6 can distill information in a more flexible way from { INLINEFORM7 }, while INLINEFORM8 tends to hide the differences between them, and INLINEFORM9 lies between INLINEFORM10 and INLINEFORM11 . The results in Table TABREF41 suggest that the more flexible and selective the operation is, the better the performance is. Effect of Evidence LSTMs Structures We investigate the effect of evidence LSTMs layer number, layer width and cross layer links in this section. The results are shown in Figure TABREF46 . For fair comparison, we do not use cross layer links in Figure TABREF46 (a) (dotted lines in Figure FIGREF4 ), and highlight the results with cross layer links (layer width 64) with circle and square for retrieved and annotated evidence settings respectively. We can conclude that: (1) generally the deeper and wider the model is, the better the performance is; (2) cross layer links are effective as they make the third evidence LSTM layer see information in both directions. Word-based v.s. Character-based Input Our model achieves fuzzy matching F1 scores of 69.78% and 70.97% on character-based input in annotated and retrieved evidence settings respectively (Table TABREF46 ), which are only 3.72 and 3.72 points lower than the corresponding scores on word-based input respectively. The performance is promising, demonstrating that our model is robust and effective. Conclusion and Future Work In this work, we build a new human annotated real-world QA dataset WebQA for developing and evaluating QA system on real-world QA data. We also propose a new end-to-end recurrent sequence labeling model for QA. Experimental results show that our model outperforms baselines significantly. There are several future directions we plan to pursue. First, multi-entity factoid and non-factoid QA are also interesting topics. Second, we plan to extend our model to multi-evidence cases. Finally, inspired by Residual Network BIBREF28 , we will investigate deeper and wider models in the future.
correctness of all the question answer pairs are verified by at least two annotators
61fba3ab10f7b6906e27b028fb1d42ec601c3fb8
61fba3ab10f7b6906e27b028fb1d42ec601c3fb8_0
Q: Did they use a crowdsourcing platform? Text: Introduction Question answering (QA) with neural network, i.e. neural QA, is an active research direction along the road towards the long-term AI goal of building general dialogue agents BIBREF0 . Unlike conventional methods, neural QA does not rely on feature engineering and is (at least nearly) end-to-end trainable. It reduces the requirement for domain specific knowledge significantly and makes domain adaption easier. Therefore, it has attracted intensive attention in recent years. Resolving QA problem requires several fundamental abilities including reasoning, memorization, etc. Various neural methods have been proposed to improve such abilities, including neural tensor networks BIBREF1 , recursive networks BIBREF2 , convolution neural networks BIBREF3 , BIBREF4 , BIBREF5 , attention models BIBREF6 , BIBREF5 , BIBREF7 , and memories BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , etc. These methods achieve promising results on various datasets, which demonstrates the high potential of neural QA. However, we believe there are still two major challenges for neural QA: System development and/or evaluation on real-world data: Although several high quality and well-designed QA datasets have been proposed in recent years, there are still problems about using them to develop and/or evaluate QA system under real-world settings due to data size and the way they are created. For example, bAbI BIBREF0 and the 30M Factoid Question-Answer Corpus BIBREF13 are artificially synthesized; the TREC datasets BIBREF14 , Free917 BIBREF15 and WebQuestions BIBREF16 are human generated but only have few thousands of questions; SimpleQuestions BIBREF11 and the CNN and Daily Mail news datasets BIBREF6 are large but generated under controlled conditions. Thus, a new large-scale real-world QA dataset is needed. A new design choice for answer production besides sequence generation and classification/ranking: Without loss of generality, the methods used for producing answers in existing neural QA works can be roughly categorized into the sequence generation type and the classification/ranking type. The former generates answers word by word, e.g. BIBREF0 , BIBREF10 , BIBREF6 . As it generally involves INLINEFORM0 computation over a large vocabulary, the computational cost is remarkably high and it is hard to produce answers with out-of-vocabulary word. The latter produces answers by classification over a predefined set of answers, e.g. BIBREF12 , or ranking given candidates by model score, e.g. BIBREF5 . Although it generally has lower computational cost than the former, it either also has difficulties in handling unseen answers or requires an extra candidate generating component which is hard for end-to-end training. Above all, we need a new design choice for answer production that is both computationally effective and capable of handling unseen words/answers. In this work, we address the above two challenges by a new dataset and a new neural QA model. Our contributions are two-fold: Experimental results show that our model outperforms baselines with a large margin on the WebQA dataset, indicating that it is effective. Furthermore, our model even achieves an F1 score of 70.97% on character-based input, which is comparable with the 74.69% F1 score on word-based input, demonstrating that our model is robust. Factoid QA as Sequence Labeling In this work, we focus on open-domain factoid QA. 
Taking Figure FIGREF3 as an example, we formalize the problem as follows: given each question Q, we have one or more evidences E, and the task is to produce the answer A, where an evidence is a piece of text of any length that contains relevant information to answer the question. The advantage of this formalization is that evidences can be retrieved from web or unstructured knowledge base, which can improve system coverage significantly. Inspired by BIBREF18 , we introduce end-to-end sequence labeling as a new design choice for answer production in neural QA. Given a question and an evidence, we use CRF BIBREF17 to assign a label to each word in the evidence to indicate whether the word is at the beginning (B), inside (I) or outside (O) of the answer (see Figure FIGREF3 for example). The key difference between our work and BIBREF18 is that BIBREF18 needs a lot work on feature engineering which further relies on POS/NER tagging, dependency parsing, question type analysis, etc. While we avoid feature engineering, and only use one single model to solve the problem. Furthermore, compared with sequence generation and classification/ranking methods for answer production, our method avoids expensive INLINEFORM0 computation and can handle unseen answers/words naturally in a principled way. Formally, we formalize QA as a sequence labeling problem as follows: suppose we have a vocabulary INLINEFORM0 of size INLINEFORM1 , given question INLINEFORM2 and evidence INLINEFORM3 , where INLINEFORM4 and INLINEFORM5 are one-hot vectors of dimension INLINEFORM6 , and INLINEFORM7 and INLINEFORM8 are the number of words in the question and evidence respectively. The problem is to find the label sequence INLINEFORM9 which maximizes the conditional probability under parameter INLINEFORM10 DISPLAYFORM0 In this work, we model INLINEFORM0 by a neural network composed of LSTMs and CRF. Overview Figure FIGREF4 shows the structure of our model. The model consists of three components: (1) question LSTM for computing question representation; (2) evidence LSTMs for evidence analysis; and (3) a CRF layer for sequence labeling. The question LSTM in a form of a single layer LSTM equipped with a single time attention takes the question as input and generates the question representation INLINEFORM0 . The three-layer evidence LSTMs takes the evidence, question representation INLINEFORM1 and optional features as input and produces “features” for the CRF layer. The CRF layer takes the “features” as input and produces the label sequence. The details will be given in the following sections. Long Short-Term Memory (LSTM) Following BIBREF19 , we define INLINEFORM0 as a function mapping its input INLINEFORM1 , previous state INLINEFORM2 and output INLINEFORM3 to current state INLINEFORM4 and output INLINEFORM5 : DISPLAYFORM0 where INLINEFORM0 are parameter matrices, INLINEFORM1 are biases, INLINEFORM2 is LSTM layer width, INLINEFORM3 is the INLINEFORM4 function, INLINEFORM5 , INLINEFORM6 and INLINEFORM7 are the input gate, forget gate and output gate respectively. Question LSTM The question LSTM consists of a single-layer LSTM and a single-time attention model. The question INLINEFORM0 is fed into the LSTM to produce a sequence of vector representations INLINEFORM1 DISPLAYFORM0 where INLINEFORM0 is the embedding matrix and INLINEFORM1 is word embedding dimension. Then a weight INLINEFORM2 is computed by the single-time attention model for each INLINEFORM3 DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 . 
And finally the weighted average INLINEFORM2 of INLINEFORM3 is used as the representation of the question DISPLAYFORM0 Evidence LSTMs The three-layer evidence LSTMs processes evidence INLINEFORM0 INLINEFORM1 to produce “features” for the CRF layer. The first LSTM layer takes evidence INLINEFORM0 , question representation INLINEFORM1 and optional features as input. We find the following two simple common word indicator features are effective: Question-Evidence common word feature (q-e.comm): for each word in the evidence, the feature has value 1 when the word also occurs in the question, otherwise 0. The intuition is that words occurring in questions tend not to be part of the answers for factoid questions. Evidence-Evidence common word feature (e-e.comm): for each word in the evidence, the feature has value 1 when the word occurs in another evidence, otherwise 0. The intuition is that words shared by two or more evidences are more likely to be part of the answers. Although counterintuitive, we found non-binary e-e.comm feature values does not work well. Because the more evidences we considered, the more words tend to get non-zero feature values, and the less discriminative the feature is. The second LSTM layer stacks on top of the first LSTM layer, but processes its output in a reverse order. The third LSTM layer stacks upon the first and second LSTM layers with cross layer links, and its output serves as features for CRF layer. Formally, the computations are defined as follows DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are one-hot feature vectors, INLINEFORM2 and INLINEFORM3 are embeddings for the features, and INLINEFORM4 and INLINEFORM5 are the feature embedding dimensions. Note that we use the same word embedding matrix INLINEFORM6 as in question LSTM. Sequence Labeling Following BIBREF20 , BIBREF21 , we use CRF on top of evidence LSTMs for sequence labeling. The probability of a label sequence INLINEFORM0 given question INLINEFORM1 and evidence INLINEFORM2 is computed as DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 is the number of label types, INLINEFORM3 is the transition weight from label INLINEFORM4 to INLINEFORM5 , and INLINEFORM6 is the INLINEFORM7 -th value of vector INLINEFORM8 . Training The objective function of our model is INLINEFORM0 where INLINEFORM0 is the golden label sequence, and INLINEFORM1 is training set. We use a minibatch stochastic gradient descent (SGD) BIBREF22 algorithm with rmsprop BIBREF23 to minimize the objective function. The initial learning rate is 0.001, batch size is 120, and INLINEFORM0 . We also apply dropout BIBREF24 to the output of all the LSTM layers. The dropout rate is 0.05. All these hyper-parameters are determined empirically via grid search on validation set. WebQA Dataset In order to train and evaluate open-domain factoid QA system for real-world questions, we build a new Chinese QA dataset named as WebQA. The dataset consists of tuples of (question, evidences, answer), which is similar to example in Figure FIGREF3 . All the questions, evidences and answers are collected from web. Table TABREF20 shows some statistics of the dataset. The questions and answers are mainly collected from a large community QA website Baidu Zhidao and a small portion are from hand collected web documents. Therefore, all these questions are indeed asked by real-world users in daily life instead of under controlled conditions. 
All the questions are of single-entity factoid type, which means (1) each question is a factoid question and (2) its answer involves only one entity (but may have multiple words). The question in Figure FIGREF3 is a positive example, while the question “Who are the children of Albert Enistein?” is a counter example because the answer involves three persons. The type and correctness of all the question answer pairs are verified by at least two annotators. All the evidences are retrieved from Internet by using a search engine with questions as queries. We download web pages returned in the first 3 result pages and take all the text pieces which have no more than 5 sentences and include at least one question word as candidate evidences. As evidence retrieval is beyond the scope of this work, we simply use TF-IDF values to re-rank these candidates. For each question in the training set, we provide the top 10 ranked evidences to annotate (“Annotated Evidence” in Table TABREF20 ). An evidence is annotated as positive if the question can be answered by just reading the evidence without any other prior knowledge, otherwise negative. Only evidences whose annotations are agreed by at least two annotators are retained. We also provide trivial negative evidences (“Retrieved Evidence” in Table TABREF20 ), i.e. evidences that do not contain golden standard answers. For each question in the validation and test sets, we provide one major positive evidence, and maybe an additional positive one to compute features. Both of them are annotated. Raw retrieved evidences are also provided for evaluation purpose (“Retrieved Evidence” in Table TABREF20 ). The dataset will be released on the project page http://idl.baidu.com/WebQA.html. Baselines We compare our model with two sets of baselines: MemN2N BIBREF12 is an end-to-end trainable version of memory networks BIBREF9 . It encodes question and evidence with a bag-of-word method and stores the representations of evidences in an external memory. A recurrent attention model is used to retrieve relevant information from the memory to answer the question. Attentive and Impatient Readers BIBREF6 use bidirectional LSTMs to encode question and evidence, and do classification over a large vocabulary based on these two encodings. The simpler Attentive Reader uses a similar way as our work to compute attention for the evidence. And the more complex Impatient Reader computes attention after processing each question word. The key difference between our model and the two readers is that they produce answer by doing classification over a large vocabulary, which is computationally expensive and has difficulties in handling unseen words. However, as our model uses an end-to-end trainable sequence labeling technique, it avoids both of the two problems by its nature. Evaluation Method The performance is measured with precision (P), recall (R) and F1-measure (F1) DISPLAYFORM0 where INLINEFORM0 is the list of correctly answered questions, INLINEFORM1 is the list of produced answers, and INLINEFORM2 is the list of all questions . As WebQA is collected from web, the same answer may be expressed in different surface forms in the golden standard answer and the evidence, e.g. “北京 (Beijing)” v.s. “北京市 (Beijing province)”. 
Therefore, we use two ways to count correctly answered questions, which are referred to as “strict” and “fuzzy” in the tables: Strict matching: A question is counted if and only if the produced answer is identical to the golden standard answer; Fuzzy matching: A question is counted if and only if the produced answer is a synonym of the golden standard answer; And we also consider two evaluation settings: Annotated evidence: Each question has one major annotated evidence and maybe another annotated evidence for computing q-e.comm and e-e.comm features (Section SECREF14 ); Retrieved evidence: Each question is provided with at most 20 automatically retrieved evidences (see Section SECREF5 for details). All the evidences will be processed by our model independently and answers are voted by frequency to decide the final result. Note that a large amount of the evidences are negative and our model should not produce any answer for them. Model Settings If not specified, the following hyper-parameters will be used in the reset of this section: LSTM layer width INLINEFORM0 (Section SECREF7 ), word embedding dimension INLINEFORM1 (Section SECREF9 ), feature embedding dimension INLINEFORM2 (Section SECREF9 ). The word embeddings are initialized with pre-trained embeddings using a 5-gram neural language model BIBREF25 and is fixed during training. We will show that injecting noise data is important for improving performance on retrieved evidence setting in Section SECREF37 . In the following experiments, 20% of the training evidences will be negative ones randomly selected on the fly, of which 25% are annotated negative evidences and 75% are retrieved trivial negative evidences (Section SECREF5 ). The percentages are determined empirically. Intuitively, we provide the noise data to teach the model learning to recognize unreliable evidence. For each evidence, we will randomly sample another evidence from the rest evidences of the question and compare them to compute the e-e.comm feature (Section SECREF14 ). We will develop more powerful models to process multiple evidences in a more principle way in the future. As the answer for each question in our WebQA dataset only involves one entity (Section SECREF5 ), we distinguish label Os before and after the first B in the label sequence explicitly to discourage our model to produce multiple answers for a question. For example, the golden labels for the example evidence in Figure FIGREF3 will became “Einstein/O1 married/O1 his/O1 first/O1 wife/O1 Mileva/B Marić/I in/O2 1903/O2”, where we use “O1” and “O2” to denote label Os before and after the first B . “Fuzzy matching” is also used for computing golden standard labels for training set. For each setting, we will run three trials with different random seeds and report the average performance in the following sections. Comparison with Baselines As the baselines can only predict one-word answers, we only do experiments on the one-word answer subset of WebQA, i.e. only questions with one-word answers are retained for training, validation and test. As shown in Table TABREF23 , our model achieves significant higher F1 scores than all the baselines. The main reason for the relative low performance of MemN2N is that it uses a bag-of-word method to encode question and evidence such that higher order information like word order is absent to the model. We think its performance can be improved by designing more complex encoding methods BIBREF26 and leave it as a future work. 
The Attentive and Impatient Readers only have access to the fixed length representations when doing classification. However, our model has access to the outputs of all the time steps of the evidence LSTMs, and scores the label sequence as a whole. Therefore, our model achieves better performance. Evaluation on the Entire WebQA Dataset In this section, we evaluate our model on the entire WebQA dataset. The evaluation results are shown in Table TABREF24 . Although producing multi-word answers is harder, our model achieves comparable results with the one-word answer subset (Table TABREF23 ), demonstrating that our model is effective for both single-word and multi-word word settings. “Softmax” in Table TABREF24 means we replace CRF with INLINEFORM0 , i.e. replace Eq. ( EQREF19 ) with DISPLAYFORM0 CRF outperforms INLINEFORM0 significantly in all cases. The reason is that INLINEFORM1 predicts each label independently, suggesting that modeling label transition explicitly is essential for improving performance. A natural choice for modeling label transition in INLINEFORM2 is to take the last prediction into account as in BIBREF27 . The result is shown in Table TABREF24 as “Softmax( INLINEFORM3 -1)”. However, its performance is only comparable with “Softmax” and significantly lower than CRF. The reason is that we can enumerate all possible label sequences implicitly by dynamic programming for CRF during predicting but this is not possible for “Softmax( INLINEFORM4 -1)” , which indicates CRF is a better choice. “Noise” in Table TABREF24 means whether we inject noise data or not (Section SECREF34 ). As all evidences are positive under the annotated evidence setting, the ability for recognizing unreliable evidence will be useless. Therefore, the performance of our model with and without noise is comparable under the annotated evidence setting. However, the ability is important to improve the performance under the retrieved evidence setting because a large amount of the retrieved evidences are negative ones. As a result, we observe significant improvement by injecting noise data for this setting. Effect of Word Embedding As stated in Section SECREF34 , the word embedding INLINEFORM0 is initialized with LM embedding and kept fixed in training. We evaluate different initialization and optimization methods in this section. The evaluation results are shown in Table TABREF40 . The second row shows the results when the embedding is optimized jointly during training. The performance drops significantly. Detailed analysis reveals that the trainable embedding enlarge trainable parameter number and the model gets over fitting easily. The model acts like a context independent entity tagger to some extend, which is not desired. For example, the model will try to find any location name in the evidence when the word “在哪 (where)” occurs in the question. In contrary, pre-trained fixed embedding forces the model to pay more attention to the latent syntactic regularities. And it also carries basic priors such as “梨 (pear)” is fruit and “李世石 (Lee Sedol)” is a person, thus the model will generalize better to test data with fixed embedding. The third row shows the result when the embedding is randomly initialized and jointly optimized. The performance drops significantly further, suggesting that pre-trained embedding indeed carries meaningful priors. 
Effect of q-e.comm and e-e.comm Features As shown in Table TABREF41 , both the q-e.comm and e-e.comm features are effective, and the q-e.comm feature contributes more to the overall performance. The reason is that the interaction between question and evidence is limited and q-e.comm feature with value 1, i.e. the corresponding word also occurs in the question, is a strong indication that the word may not be part of the answer. Effect of Question Representations In this section, we compare the single-time attention method for computing INLINEFORM0 ( INLINEFORM1 , Eq. ( EQREF12 , EQREF13 )) with two widely used options: element-wise max operation INLINEFORM2 : INLINEFORM3 and element-wise average operation INLINEFORM4 : INLINEFORM5 . Intuitively, INLINEFORM6 can distill information in a more flexible way from { INLINEFORM7 }, while INLINEFORM8 tends to hide the differences between them, and INLINEFORM9 lies between INLINEFORM10 and INLINEFORM11 . The results in Table TABREF41 suggest that the more flexible and selective the operation is, the better the performance is. Effect of Evidence LSTMs Structures We investigate the effect of evidence LSTMs layer number, layer width and cross layer links in this section. The results are shown in Figure TABREF46 . For fair comparison, we do not use cross layer links in Figure TABREF46 (a) (dotted lines in Figure FIGREF4 ), and highlight the results with cross layer links (layer width 64) with circle and square for retrieved and annotated evidence settings respectively. We can conclude that: (1) generally the deeper and wider the model is, the better the performance is; (2) cross layer links are effective as they make the third evidence LSTM layer see information in both directions. Word-based v.s. Character-based Input Our model achieves fuzzy matching F1 scores of 69.78% and 70.97% on character-based input in annotated and retrieved evidence settings respectively (Table TABREF46 ), which are only 3.72 and 3.72 points lower than the corresponding scores on word-based input respectively. The performance is promising, demonstrating that our model is robust and effective. Conclusion and Future Work In this work, we build a new human annotated real-world QA dataset WebQA for developing and evaluating QA system on real-world QA data. We also propose a new end-to-end recurrent sequence labeling model for QA. Experimental results show that our model outperforms baselines significantly. There are several future directions we plan to pursue. First, multi-entity factoid and non-factoid QA are also interesting topics. Second, we plan to extend our model to multi-evidence cases. Finally, inspired by Residual Network BIBREF28 , we will investigate deeper and wider models in the future.
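The three ways of condensing the question LSTM outputs compared above (single-time attention, element-wise max and element-wise average) can be sketched as follows. The shapes and the dot-product attention parameterisation are assumptions made for illustration, not the paper's exact formulation in Eq. (EQREF12, EQREF13).

import numpy as np

rng = np.random.default_rng(1)
n_q, d = 6, 8                    # question length, LSTM output width
H = rng.normal(size=(n_q, d))    # question LSTM outputs {h_1, ..., h_nq}
w = rng.normal(size=d)           # scoring vector for the single-time attention

r_max = H.max(axis=0)            # element-wise max: keeps the strongest activation per dimension
r_avg = H.mean(axis=0)           # element-wise average: tends to hide differences between time steps

scores = H @ w                   # one attention score per time step
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()             # softmax over time steps
r_att = alpha @ H                # single-time attention: a selective weighted sum

print(r_max.shape, r_avg.shape, r_att.shape)   # all (d,)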
Unanswerable
80de3baf97a55ea33e0fe0cafa6f6221ba347d0a
80de3baf97a55ea33e0fe0cafa6f6221ba347d0a_0
Q: Are resolution mode variables hand crafted? Text: Introduction Entity coreference resolution has become a critical component for many Natural Language Processing (NLP) tasks. Systems requiring deep language understanding, such as information extraction BIBREF2 , semantic event learning BIBREF3 , BIBREF4 , and named entity linking BIBREF5 , BIBREF6 all benefit from entity coreference information. Entity coreference resolution is the task of identifying mentions (i.e., noun phrases) in a text or dialogue that refer to the same real-world entities. In recent years, several supervised entity coreference resolution systems have been proposed, which, according to ng:2010:ACL, can be categorized into three classes — mention-pair models BIBREF7 , entity-mention models BIBREF8 , BIBREF9 , BIBREF10 and ranking models BIBREF11 , BIBREF12 , BIBREF13 — among which ranking models recently obtained state-of-the-art performance. However, the manually annotated corpora that these systems rely on are highly expensive to create, in particular when we want to build data for resource-poor languages BIBREF14 . That makes unsupervised approaches, which only require unannotated text for training, a desirable solution to this problem. Several unsupervised learning algorithms have been applied to coreference resolution. haghighi-klein:2007:ACLMain presented a mention-pair nonparametric fully-generative Bayesian model for unsupervised coreference resolution. Based on this model, ng:2008:EMNLP probabilistically induced coreference partitions via EM clustering. poon-domingos:2008:EMNLP proposed an entity-mention model that is able to perform joint inference across mentions by using Markov Logic. Unfortunately, these unsupervised systems' performance on accuracy significantly falls behind those of supervised systems, and are even worse than the deterministic rule-based systems. Furthermore, there is no previous work exploring the possibility of developing an unsupervised ranking model which achieved state-of-the-art performance under supervised settings for entity coreference resolution. In this paper, we propose an unsupervised generative ranking model for entity coreference resolution. Our experimental results on the English data from the CoNLL-2012 shared task BIBREF0 show that our unsupervised system outperforms the Stanford deterministic system BIBREF1 by 3.01% absolute on the CoNLL official metric. The contributions of this work are (i) proposing the first unsupervised ranking model for entity coreference resolution. (ii) giving empirical evaluations of this model on benchmark data sets. (iii) considerably narrowing the gap to supervised coreference resolution accuracy. Notations and Definitions In the following, $D = \lbrace m_0, m_1, \ldots , m_n\rbrace $ represents a generic input document which is a sequence of coreference mentions, including the artificial root mention (denoted by $m_0$ ). The method to detect and extract these mentions is discussed later in Section "Mention Detection" . Let $C = \lbrace c_1, c_2, \ldots , c_n\rbrace $ denote the coreference assignment of a given document, where each mention $m_i$ has an associated random variable $c_i$ taking values in the set $\lbrace 0, i, \ldots , i-1\rbrace $ ; this variable specifies $m_i$ 's selected antecedent ( $c_i \in \lbrace 1, 2, \ldots , i-1\rbrace $ ), or indicates that it begins a new coreference chain ( $c_i = 0$ ). 
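A tiny Python sketch of the notation just introduced may help: a document is a sequence of mentions with an artificial root mention m_0, and the coreference assignment is a list c where c[i] is either the index of the selected antecedent or 0 for a mention starting a new chain. The grouping helper and the toy mentions are purely illustrative.

# Mentions of a toy document; index 0 is the artificial root mention m_0.
mentions = ["<ROOT>", "Barack Obama", "the president", "he", "Michelle", "she"]

# c[i] in {0, ..., i-1}: the antecedent of mention i, or 0 for a new coreference chain.
c = [None, 0, 1, 1, 0, 4]

def chains(c):
    # Follow antecedent links back to each chain-starting mention.
    groups = {}
    for i in range(1, len(c)):
        j = i
        while c[j] != 0:
            j = c[j]
        groups.setdefault(j, []).append(i)
    return list(groups.values())

print([[mentions[i] for i in g] for g in chains(c)])
# [['Barack Obama', 'the president', 'he'], ['Michelle', 'she']]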
Generative Ranking Model The following is a straightforward way to build a generative model for coreference: $$\begin{array}{rcl} P(D, C) & = & P(D|C)P(C) \\ & = & \prod \limits _{j=1}^{n}P(m_j|m_{c_j})\prod \limits _{j=1}^{n}P(c_j|j) \end{array}$$ (Eq. 3) where we factorize the probabilities $P(D|C)$ and $P(C)$ into each position $j$ by adopting appropriate independence assumptions that given the coreference assignment $c_j$ and corresponding coreferent mention $m_{c_j}$ , the mention $m_j$ is independent with other mentions in front of it. This independent assumption is similar to that in the IBM 1 model on machine translation BIBREF15 , where it assumes that given the corresponding English word, the aligned foreign word is independent with other English and foreign words. We do not make any independent assumptions among different features (see Section "Features" for details). Inference in this model is efficient, because we can compute $c_j$ separately for each mention: $ c^*_j = \operatornamewithlimits{argmax}\limits _{c_j} P(m_j|m_{c_j}) P(c_j|j) $ The model is a so-called ranking model because it is able to identify the most probable candidate antecedent given a mention to be resolved. Resolution Mode Variables According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\Pi = \lbrace \pi _1, \ldots , \pi _n\rbrace $ , where for each mention $j$ the variable $\pi _j \in \lbrace str, prec, attr\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\Pi $ is deterministic when $D$ is given (i.e. $P(\Pi |D)$ is a point distribution). We determine $\pi _j$ for each mention $m_j$ in the following way: $\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 . $\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve. $\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions. Now, we can extend the generative model in Eq. 3 to: $ \begin{array}{rcl} & & P(D, C) = P(D, C, \Pi ) \\ & = & \prod \limits _{j=1}^{n}P(m_j|m_{c_j}, \pi _j) P(c_j|\pi _j, j) P(\pi _j|j) \end{array} $ where we define $P(\pi _j|j)$ to be uniform distribution. We model $P(m_j|m_{c_j}, \pi _j)$ and $P(c_j|\pi _j, j)$ in the following way: $ \begin{array}{l} P(m_j|m_{c_j}, \pi _j) = t(m_j|m_{c_j}, \pi _j) \\ P(c_j|\pi _j, j) = \left\lbrace \begin{array}{ll} q(c_j|\pi _j, j) & \pi _j = attr \\ \frac{1}{j} & \textrm {otherwise} \end{array}\right. \end{array} $ where $\theta = \lbrace t, q\rbrace $ are parameters of our model. Note that in the attribute-matching mode ( $\pi _j = attr$ ) we model $P(c_j|\pi _j, j)$ with parameter $q$ , while in the other two modes, we use the uniform distribution. 
It makes sense because the position information is important for coreference resolved by matching attributes of two mentions such as resolving pronoun coreference, but not that important for those resolved by matching text or special relations like two mentions referring the same person and matching by the name. [t] Learning Model with EM Initialization: Initialize $\theta _0 = \lbrace t_0, q_0\rbrace $ $t=0$ to $T$ set all counts $c(\ldots ) = 0$ each document $D$ $j=1$ to $n$ $k=0$ to $j - 1$ $L_{jk} = \frac{t(m_j|m_k,\pi _j)q(k|\pi _j, j)}{\sum \limits _{i = 0}^{j-1} t(m_j|m_i,\pi _j)q(i|\pi _j, j)}$ $c(m_j, m_k, \pi _j) \mathrel {+}= L_{jk}$ $c(m_k, \pi _j) \mathrel {+}= L_{jk}$ $c(k, j, \pi _j) \mathrel {+}= L_{jk}$ $c(j, \pi _j) \mathrel {+}= L_{jk}$ Recalculate the parameters $t(m|m^{\prime }, \pi ) = \frac{c(m, m^{\prime }, \pi )}{c(m^{\prime }, \pi )}$ $q(k, j, \pi ) = \frac{c(k, j, \pi )}{c(j, \pi )}$ Features In this section, we describe the features we use to represent mentions. Specifically, as shown in Table 1 , we use different features under different resolution modes. It should be noted that only the Distance feature is designed for parameter $q$ , all other features are designed for parameter $t$ . Model Learning For model learning, we run EM algorithm BIBREF19 on our Model, treating $D$ as observed data and $C$ as latent variables. We run EM with 10 iterations and select the parameters achieving the best performance on the development data. Each iteration takes around 12 hours with 10 CPUs parallelly. The best parameters appear at around the 5th iteration, according to our experiments.The detailed derivation of the learning algorithm is shown in Appendix A. The pseudo-code is shown is Algorithm "Resolution Mode Variables" . We use uniform initialization for all the parameters in our model. Several previous work has attempted to use EM for entity coreference resolution. cherry-bergsma:2005 and charniak-elsner:2009 applied EM for pronoun anaphora resolution. ng:2008:EMNLP probabilistically induced coreference partitions via EM clustering. Recently, moosavi2014 proposed an unsupervised model utilizing the most informative relations and achieved competitive performance with the Stanford system. Mention Detection The basic rules we used to detect mentions are similar to those of Lee:2013:CL, except that their system uses a set of filtering rules designed to discard instances of pleonastic it, partitives, certain quantified noun phrases and other spurious mentions. Our system keeps partitives, quantified noun phrases and bare NP mentions, but discards pleonastic it and other spurious mentions. Experimental Setup Datasets. Due to the availability of readily parsed data, we select the APW and NYT sections of Gigaword Corpus (years 1994-2010) BIBREF20 to train the model. Following previous work BIBREF3 , we remove duplicated documents and the documents which include fewer than 3 sentences. The development and test data are the English data from the CoNLL-2012 shared task BIBREF0 , which is derived from the OntoNotes corpus BIBREF21 . The corpora statistics are shown in Table 2 . Our system is evaluated with automatically extracted mentions on the version of the data with automatic preprocessing information (e.g., predicted parse trees). Evaluation Metrics. We evaluate our model on three measures widely used in the literature: MUC BIBREF22 , B $^{3}$ BIBREF23 , and Entity-based CEAF (CEAF $_e$ ) BIBREF24 . 
In addition, we also report results on another two popular metrics: Mention-based CEAF (CEAF $_m$ ) and BLANC BIBREF25 . All the results are given by the latest version of CoNLL-2012 scorer Results and Comparison Table 3 illustrates the results of our model together as baseline with two deterministic systems, namely Stanford: the Stanford system BIBREF10 and Multigraph: the unsupervised multigraph system BIBREF26 , and one unsupervised system, namely MIR: the unsupervised system using most informative relations BIBREF27 . Our model outperforms the three baseline systems on all the evaluation metrics. Specifically, our model achieves improvements of 2.93% and 3.01% on CoNLL F1 score over the Stanford system, the winner of the CoNLL 2011 shared task, on the CoNLL 2012 development and test sets, respectively. The improvements on CoNLL F1 score over the Multigraph model are 1.41% and 1.77% on the development and test sets, respectively. Comparing with the MIR model, we obtain significant improvements of 2.62% and 3.02% on CoNLL F1 score. To make a thorough empirical comparison with previous studies, Table 3 (below the dashed line) also shows the results of some state-of-the-art supervised coreference resolution systems — IMS: the second best system in the CoNLL 2012 shared task BIBREF28 ; Latent-Tree: the latent tree model BIBREF29 obtaining the best results in the shared task; Berkeley: the Berkeley system with the final feature set BIBREF12 ; LaSO: the structured perceptron system with non-local features BIBREF30 ; Latent-Strc: the latent structure system BIBREF31 ; Model-Stack: the entity-centric system with model stacking BIBREF32 ; and Non-Linear: the non-linear mention-ranking model with feature representations BIBREF33 . Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 . Conclusion We proposed a new generative, unsupervised ranking model for entity coreference resolution into which we introduced resolution mode variables to distinguish mentions resolved by different categories of information. Experimental results on the data from CoNLL-2012 shared task show that our system significantly improves the accuracy on different evaluation metrics over the baseline systems. One possible direction for future work is to differentiate more resolution modes. Another one is to add more precise or even event-based features to improve the model's performance. Acknowledgements This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. Appendix A. Derivation of Model Learning Formally, we iteratively estimate the model parameters $\theta $ , employing the following EM algorithm: For simplicity, we denote: $ {\small \begin{array}{rcl} P(C|D; \theta ) & = & \tilde{P}(C|D) \\ P(C|D; \theta ^{\prime }) & = & P(C|D) \end{array}} $ In addition, we use $\tau (\pi _j|j)$ to denote the probability $P(\pi _j|j)$ which is uniform distribution in our model. 
Moreover, we use the following notation for convenience: $ {\small \theta (m_j, m_k, j, k, \pi _j) = t(m_j|m_k, \pi _j) q(k|\pi _j, j) \tau (\pi _j|j) } $ Then, we have $ {\scriptsize { \begin{array}{rl} & E_{\tilde{P}(c|D)} [\log P(D, C)] \\ = & \sum \limits _{C} \tilde{P}(C|D) \log P(D, C) \\ = & \sum \limits _{C} \tilde{P}(C|D) \big (\sum \limits _{j=1}^{n} \log t(m_j|m_{c_j}, \pi _j) + \log q(c_j|\pi _j, j) + \log \tau (\pi _j|j) \big ) \\ = & \sum \limits _{j=1}^{n} \sum \limits _{k=0}^{j-1} L_{jk} \big (\log t(m_j|m_k, \pi _j) + \log q(k|\pi _j, j) + \log \tau (\pi _j|j) \big ) \end{array}}} $ Then the parameters $t$ and $q$ that maximize $E_{\tilde{P}(c|D)} [\log P(D, C)]$ satisfy that $ {\small \begin{array}{rcl} t(m_j|m_k, \pi _j) & = & \frac{L_{jk}}{\sum \limits _{i = 1}^{n} L_{ik}} \\ q(k|\pi _j, j) & = & \frac{L_{jk}}{\sum \limits _{i = 0}^{j-1} L_{ji}} \end{array}} $ where $L_{jk}$ can be calculated by $ {\small \begin{array}{rcl} L_{jk} & = & \sum \limits _{C, c_j=k} \tilde{P}(C|D) = \frac{\sum \limits _{C, c_j=k} \tilde{P}(C, D)}{\sum \limits _{C} \tilde{P}(C, D)} \\ & = & \frac{\sum \limits _{C, c_j=k}\prod \limits _{i = 1}^{n}\tilde{\theta }(m_i, m_{c_i}, c_i, i, \pi _i)}{\sum \limits _{C}\prod \limits _{i = 1}^{n}\tilde{\theta }(m_i, m_{c_i}, c_i, i, \pi _i)} \\ & = & \frac{\tilde{\theta }(m_j, m_k, k, j, \pi _j)\sum \limits _{C(-j)}\tilde{P}(C(-j)|D)}{\sum \limits _{i=0}^{j-1}\tilde{\theta }(m_j, m_i, i, j, \pi _j)\sum \limits _{C(-j)}\tilde{P}(C(-j)|D)} \\ & = & \frac{\tilde{\theta }(m_j, m_k, k, j, \pi _j)}{\sum \limits _{i=0}^{j-1}\tilde{\theta }(m_j, m_i, i, j, \pi _j)} \\ & = & \frac{\tilde{t}(m_j|m_k, \pi _j) \tilde{q}(k|\pi _j, j) \tilde{\tau }(\pi _j|j)}{\sum \limits _{i=0}^{j-1}\tilde{t}(m_j|m_i, \pi _j) \tilde{q}(i|\pi _j, j) \tilde{\tau }(\pi _j|j)} \\ & = & \frac{\tilde{t}(m_j|m_k, \pi _j) \tilde{q}(k|\pi _j, j)}{\sum \limits _{i=0}^{j-1}\tilde{t}(m_j|m_i, \pi _j) \tilde{q}(i|\pi _j, j)} \end{array}} $ where $C(-j) = \lbrace c_1, \ldots , c_{j-1}, c_{j+1}, \ldots , c_{n}\rbrace $ . The above derivations correspond to the learning algorithm in Algorithm "Resolution Mode Variables" .
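As an illustration of the E-step just derived, the posterior weights L_{jk} for one mention can be computed directly from the current parameters. The t and q tables below are toy dictionaries, the tiny floor on unseen pairs exists only to keep the sketch from dividing by zero, and none of this is the authors' implementation.

def e_step_weights(j, pi_j, candidates, mention, t, q):
    # candidates: antecedent mentions m_0, ..., m_{j-1} (index 0 is the root).
    # t[(m, m_prime, pi)] ~ t(m | m', pi); q[(k, j, pi)] ~ q(k | pi, j).
    def q_term(k):
        # q is only parameterised in the attribute-matching mode; otherwise uniform 1/j.
        return q.get((k, j, pi_j), 0.0) if pi_j == "attr" else 1.0 / j
    unnorm = [t.get((mention, m_k, pi_j), 1e-12) * q_term(k)
              for k, m_k in enumerate(candidates)]
    z = sum(unnorm)
    return [u / z for u in unnorm]        # L_{j0}, ..., L_{j,j-1}

# The M-step then renormalises the expected counts accumulated from these weights:
#   t(m | m', pi) = c(m, m', pi) / c(m', pi)
#   q(k | pi, j)  = c(k, j, pi) / c(j, pi)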
No
f5707610dc8ae2a3dc23aec63d4afa4b40b7ec1e
f5707610dc8ae2a3dc23aec63d4afa4b40b7ec1e_0
Q: What are resolution model variables? Text: Introduction Entity coreference resolution has become a critical component for many Natural Language Processing (NLP) tasks. Systems requiring deep language understanding, such as information extraction BIBREF2 , semantic event learning BIBREF3 , BIBREF4 , and named entity linking BIBREF5 , BIBREF6 all benefit from entity coreference information. Entity coreference resolution is the task of identifying mentions (i.e., noun phrases) in a text or dialogue that refer to the same real-world entities. In recent years, several supervised entity coreference resolution systems have been proposed, which, according to ng:2010:ACL, can be categorized into three classes — mention-pair models BIBREF7 , entity-mention models BIBREF8 , BIBREF9 , BIBREF10 and ranking models BIBREF11 , BIBREF12 , BIBREF13 — among which ranking models recently obtained state-of-the-art performance. However, the manually annotated corpora that these systems rely on are highly expensive to create, in particular when we want to build data for resource-poor languages BIBREF14 . That makes unsupervised approaches, which only require unannotated text for training, a desirable solution to this problem. Several unsupervised learning algorithms have been applied to coreference resolution. haghighi-klein:2007:ACLMain presented a mention-pair nonparametric fully-generative Bayesian model for unsupervised coreference resolution. Based on this model, ng:2008:EMNLP probabilistically induced coreference partitions via EM clustering. poon-domingos:2008:EMNLP proposed an entity-mention model that is able to perform joint inference across mentions by using Markov Logic. Unfortunately, these unsupervised systems' performance on accuracy significantly falls behind those of supervised systems, and are even worse than the deterministic rule-based systems. Furthermore, there is no previous work exploring the possibility of developing an unsupervised ranking model which achieved state-of-the-art performance under supervised settings for entity coreference resolution. In this paper, we propose an unsupervised generative ranking model for entity coreference resolution. Our experimental results on the English data from the CoNLL-2012 shared task BIBREF0 show that our unsupervised system outperforms the Stanford deterministic system BIBREF1 by 3.01% absolute on the CoNLL official metric. The contributions of this work are (i) proposing the first unsupervised ranking model for entity coreference resolution. (ii) giving empirical evaluations of this model on benchmark data sets. (iii) considerably narrowing the gap to supervised coreference resolution accuracy. Notations and Definitions In the following, $D = \lbrace m_0, m_1, \ldots , m_n\rbrace $ represents a generic input document which is a sequence of coreference mentions, including the artificial root mention (denoted by $m_0$ ). The method to detect and extract these mentions is discussed later in Section "Mention Detection" . Let $C = \lbrace c_1, c_2, \ldots , c_n\rbrace $ denote the coreference assignment of a given document, where each mention $m_i$ has an associated random variable $c_i$ taking values in the set $\lbrace 0, i, \ldots , i-1\rbrace $ ; this variable specifies $m_i$ 's selected antecedent ( $c_i \in \lbrace 1, 2, \ldots , i-1\rbrace $ ), or indicates that it begins a new coreference chain ( $c_i = 0$ ). 
Generative Ranking Model The following is a straightforward way to build a generative model for coreference: $$\begin{array}{rcl} P(D, C) & = & P(D|C)P(C) \\ & = & \prod \limits _{j=1}^{n}P(m_j|m_{c_j})\prod \limits _{j=1}^{n}P(c_j|j) \end{array}$$ (Eq. 3) where we factorize the probabilities $P(D|C)$ and $P(C)$ into each position $j$ by adopting appropriate independence assumptions that given the coreference assignment $c_j$ and corresponding coreferent mention $m_{c_j}$ , the mention $m_j$ is independent with other mentions in front of it. This independent assumption is similar to that in the IBM 1 model on machine translation BIBREF15 , where it assumes that given the corresponding English word, the aligned foreign word is independent with other English and foreign words. We do not make any independent assumptions among different features (see Section "Features" for details). Inference in this model is efficient, because we can compute $c_j$ separately for each mention: $ c^*_j = \operatornamewithlimits{argmax}\limits _{c_j} P(m_j|m_{c_j}) P(c_j|j) $ The model is a so-called ranking model because it is able to identify the most probable candidate antecedent given a mention to be resolved. Resolution Mode Variables According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\Pi = \lbrace \pi _1, \ldots , \pi _n\rbrace $ , where for each mention $j$ the variable $\pi _j \in \lbrace str, prec, attr\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\Pi $ is deterministic when $D$ is given (i.e. $P(\Pi |D)$ is a point distribution). We determine $\pi _j$ for each mention $m_j$ in the following way: $\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 . $\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve. $\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions. Now, we can extend the generative model in Eq. 3 to: $ \begin{array}{rcl} & & P(D, C) = P(D, C, \Pi ) \\ & = & \prod \limits _{j=1}^{n}P(m_j|m_{c_j}, \pi _j) P(c_j|\pi _j, j) P(\pi _j|j) \end{array} $ where we define $P(\pi _j|j)$ to be uniform distribution. We model $P(m_j|m_{c_j}, \pi _j)$ and $P(c_j|\pi _j, j)$ in the following way: $ \begin{array}{l} P(m_j|m_{c_j}, \pi _j) = t(m_j|m_{c_j}, \pi _j) \\ P(c_j|\pi _j, j) = \left\lbrace \begin{array}{ll} q(c_j|\pi _j, j) & \pi _j = attr \\ \frac{1}{j} & \textrm {otherwise} \end{array}\right. \end{array} $ where $\theta = \lbrace t, q\rbrace $ are parameters of our model. Note that in the attribute-matching mode ( $\pi _j = attr$ ) we model $P(c_j|\pi _j, j)$ with parameter $q$ , while in the other two modes, we use the uniform distribution. 
It makes sense because the position information is important for coreference resolved by matching attributes of two mentions such as resolving pronoun coreference, but not that important for those resolved by matching text or special relations like two mentions referring the same person and matching by the name. [t] Learning Model with EM Initialization: Initialize $\theta _0 = \lbrace t_0, q_0\rbrace $ $t=0$ to $T$ set all counts $c(\ldots ) = 0$ each document $D$ $j=1$ to $n$ $k=0$ to $j - 1$ $L_{jk} = \frac{t(m_j|m_k,\pi _j)q(k|\pi _j, j)}{\sum \limits _{i = 0}^{j-1} t(m_j|m_i,\pi _j)q(i|\pi _j, j)}$ $c(m_j, m_k, \pi _j) \mathrel {+}= L_{jk}$ $c(m_k, \pi _j) \mathrel {+}= L_{jk}$ $c(k, j, \pi _j) \mathrel {+}= L_{jk}$ $c(j, \pi _j) \mathrel {+}= L_{jk}$ Recalculate the parameters $t(m|m^{\prime }, \pi ) = \frac{c(m, m^{\prime }, \pi )}{c(m^{\prime }, \pi )}$ $q(k, j, \pi ) = \frac{c(k, j, \pi )}{c(j, \pi )}$ Features In this section, we describe the features we use to represent mentions. Specifically, as shown in Table 1 , we use different features under different resolution modes. It should be noted that only the Distance feature is designed for parameter $q$ , all other features are designed for parameter $t$ . Model Learning For model learning, we run EM algorithm BIBREF19 on our Model, treating $D$ as observed data and $C$ as latent variables. We run EM with 10 iterations and select the parameters achieving the best performance on the development data. Each iteration takes around 12 hours with 10 CPUs parallelly. The best parameters appear at around the 5th iteration, according to our experiments.The detailed derivation of the learning algorithm is shown in Appendix A. The pseudo-code is shown is Algorithm "Resolution Mode Variables" . We use uniform initialization for all the parameters in our model. Several previous work has attempted to use EM for entity coreference resolution. cherry-bergsma:2005 and charniak-elsner:2009 applied EM for pronoun anaphora resolution. ng:2008:EMNLP probabilistically induced coreference partitions via EM clustering. Recently, moosavi2014 proposed an unsupervised model utilizing the most informative relations and achieved competitive performance with the Stanford system. Mention Detection The basic rules we used to detect mentions are similar to those of Lee:2013:CL, except that their system uses a set of filtering rules designed to discard instances of pleonastic it, partitives, certain quantified noun phrases and other spurious mentions. Our system keeps partitives, quantified noun phrases and bare NP mentions, but discards pleonastic it and other spurious mentions. Experimental Setup Datasets. Due to the availability of readily parsed data, we select the APW and NYT sections of Gigaword Corpus (years 1994-2010) BIBREF20 to train the model. Following previous work BIBREF3 , we remove duplicated documents and the documents which include fewer than 3 sentences. The development and test data are the English data from the CoNLL-2012 shared task BIBREF0 , which is derived from the OntoNotes corpus BIBREF21 . The corpora statistics are shown in Table 2 . Our system is evaluated with automatically extracted mentions on the version of the data with automatic preprocessing information (e.g., predicted parse trees). Evaluation Metrics. We evaluate our model on three measures widely used in the literature: MUC BIBREF22 , B $^{3}$ BIBREF23 , and Entity-based CEAF (CEAF $_e$ ) BIBREF24 . 
In addition, we also report results on another two popular metrics: Mention-based CEAF (CEAF $_m$ ) and BLANC BIBREF25 . All the results are given by the latest version of CoNLL-2012 scorer Results and Comparison Table 3 illustrates the results of our model together as baseline with two deterministic systems, namely Stanford: the Stanford system BIBREF10 and Multigraph: the unsupervised multigraph system BIBREF26 , and one unsupervised system, namely MIR: the unsupervised system using most informative relations BIBREF27 . Our model outperforms the three baseline systems on all the evaluation metrics. Specifically, our model achieves improvements of 2.93% and 3.01% on CoNLL F1 score over the Stanford system, the winner of the CoNLL 2011 shared task, on the CoNLL 2012 development and test sets, respectively. The improvements on CoNLL F1 score over the Multigraph model are 1.41% and 1.77% on the development and test sets, respectively. Comparing with the MIR model, we obtain significant improvements of 2.62% and 3.02% on CoNLL F1 score. To make a thorough empirical comparison with previous studies, Table 3 (below the dashed line) also shows the results of some state-of-the-art supervised coreference resolution systems — IMS: the second best system in the CoNLL 2012 shared task BIBREF28 ; Latent-Tree: the latent tree model BIBREF29 obtaining the best results in the shared task; Berkeley: the Berkeley system with the final feature set BIBREF12 ; LaSO: the structured perceptron system with non-local features BIBREF30 ; Latent-Strc: the latent structure system BIBREF31 ; Model-Stack: the entity-centric system with model stacking BIBREF32 ; and Non-Linear: the non-linear mention-ranking model with feature representations BIBREF33 . Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 . Conclusion We proposed a new generative, unsupervised ranking model for entity coreference resolution into which we introduced resolution mode variables to distinguish mentions resolved by different categories of information. Experimental results on the data from CoNLL-2012 shared task show that our system significantly improves the accuracy on different evaluation metrics over the baseline systems. One possible direction for future work is to differentiate more resolution modes. Another one is to add more precise or even event-based features to improve the model's performance. Acknowledgements This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. Appendix A. Derivation of Model Learning Formally, we iteratively estimate the model parameters $\theta $ , employing the following EM algorithm: For simplicity, we denote: $ {\small \begin{array}{rcl} P(C|D; \theta ) & = & \tilde{P}(C|D) \\ P(C|D; \theta ^{\prime }) & = & P(C|D) \end{array}} $ In addition, we use $\tau (\pi _j|j)$ to denote the probability $P(\pi _j|j)$ which is uniform distribution in our model. 
Moreover, we use the following notation for convenience: $ {\small \theta (m_j, m_k, j, k, \pi _j) = t(m_j|m_k, \pi _j) q(k|\pi _j, j) \tau (\pi _j|j) } $ Then, we have $ {\scriptsize { \begin{array}{rl} & E_{\tilde{P}(c|D)} [\log P(D, C)] \\ = & \sum \limits _{C} \tilde{P}(C|D) \log P(D, C) \\ = & \sum \limits _{C} \tilde{P}(C|D) \big (\sum \limits _{j=1}^{n} \log t(m_j|m_{c_j}, \pi _j) + \log q(c_j|\pi _j, j) + \log \tau (\pi _j|j) \big ) \\ = & \sum \limits _{j=1}^{n} \sum \limits _{k=0}^{j-1} L_{jk} \big (\log t(m_j|m_k, \pi _j) + \log q(k|\pi _j, j) + \log \tau (\pi _j|j) \big ) \end{array}}} $ Then the parameters $t$ and $q$ that maximize $E_{\tilde{P}(c|D)} [\log P(D, C)]$ satisfy that $ {\small \begin{array}{rcl} t(m_j|m_k, \pi _j) & = & \frac{L_{jk}}{\sum \limits _{i = 1}^{n} L_{ik}} \\ q(k|\pi _j, j) & = & \frac{L_{jk}}{\sum \limits _{i = 0}^{j-1} L_{ji}} \end{array}} $ where $L_{jk}$ can be calculated by $ {\small \begin{array}{rcl} L_{jk} & = & \sum \limits _{C, c_j=k} \tilde{P}(C|D) = \frac{\sum \limits _{C, c_j=k} \tilde{P}(C, D)}{\sum \limits _{C} \tilde{P}(C, D)} \\ & = & \frac{\sum \limits _{C, c_j=k}\prod \limits _{i = 1}^{n}\tilde{\theta }(m_i, m_{c_i}, c_i, i, \pi _i)}{\sum \limits _{C}\prod \limits _{i = 1}^{n}\tilde{\theta }(m_i, m_{c_i}, c_i, i, \pi _i)} \\ & = & \frac{\tilde{\theta }(m_j, m_k, k, j, \pi _j)\sum \limits _{C(-j)}\tilde{P}(C(-j)|D)}{\sum \limits _{i=0}^{j-1}\tilde{\theta }(m_j, m_i, i, j, \pi _j)\sum \limits _{C(-j)}\tilde{P}(C(-j)|D)} \\ & = & \frac{\tilde{\theta }(m_j, m_k, k, j, \pi _j)}{\sum \limits _{i=0}^{j-1}\tilde{\theta }(m_j, m_i, i, j, \pi _j)} \\ & = & \frac{\tilde{t}(m_j|m_k, \pi _j) \tilde{q}(k|\pi _j, j) \tilde{\tau }(\pi _j|j)}{\sum \limits _{i=0}^{j-1}\tilde{t}(m_j|m_i, \pi _j) \tilde{q}(i|\pi _j, j) \tilde{\tau }(\pi _j|j)} \\ & = & \frac{\tilde{t}(m_j|m_k, \pi _j) \tilde{q}(k|\pi _j, j)}{\sum \limits _{i=0}^{j-1}\tilde{t}(m_j|m_i, \pi _j) \tilde{q}(i|\pi _j, j)} \end{array}} $ where $C(-j) = \lbrace c_1, \ldots , c_{j-1}, c_{j+1}, \ldots , c_{n}\rbrace $ . The above derivations correspond to the learning algorithm in Algorithm "Resolution Mode Variables" .
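The deterministic assignment of the resolution mode variables described in the paper can be sketched as below. The two sieve predicates are placeholders standing in for the Stanford sieves named in the text (String Match, Relaxed String Match, Strict Head Match A, Speaker Identification, Precise Constructs), not re-implementations of them.

def string_match_sieves(m_i, m_j):
    # Placeholder for the String Match, Relaxed String Match and Strict Head Match A sieves.
    return m_i.lower() == m_j.lower()

def precise_construct_sieves(m_i, m_j):
    # Placeholder for the Speaker Identification and Precise Constructs sieves.
    return False

def resolution_mode(j, mentions):
    # pi_j is deterministic given the document: str, then prec, otherwise attr.
    if any(string_match_sieves(mentions[i], mentions[j]) for i in range(1, j)):
        return "str"
    if any(precise_construct_sieves(mentions[i], mentions[j]) for i in range(1, j)):
        return "prec"
    return "attr"

mentions = ["<ROOT>", "Hillary Clinton", "Clinton", "Hillary Clinton", "she"]
print([resolution_mode(j, mentions) for j in range(1, len(mentions))])
# ['attr', 'attr', 'str', 'attr']  (only the repeated full string triggers the placeholder sieve)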
Variables in the set {str, prec, attr} indicating in which mode the mention should be resolved.
e76139c63da0f861c097466983fbe0c94d1d9810
e76139c63da0f861c097466983fbe0c94d1d9810_0
Q: Is the model presented in the paper state of the art? Text: Introduction Entity coreference resolution has become a critical component for many Natural Language Processing (NLP) tasks. Systems requiring deep language understanding, such as information extraction BIBREF2 , semantic event learning BIBREF3 , BIBREF4 , and named entity linking BIBREF5 , BIBREF6 all benefit from entity coreference information. Entity coreference resolution is the task of identifying mentions (i.e., noun phrases) in a text or dialogue that refer to the same real-world entities. In recent years, several supervised entity coreference resolution systems have been proposed, which, according to ng:2010:ACL, can be categorized into three classes — mention-pair models BIBREF7 , entity-mention models BIBREF8 , BIBREF9 , BIBREF10 and ranking models BIBREF11 , BIBREF12 , BIBREF13 — among which ranking models recently obtained state-of-the-art performance. However, the manually annotated corpora that these systems rely on are highly expensive to create, in particular when we want to build data for resource-poor languages BIBREF14 . That makes unsupervised approaches, which only require unannotated text for training, a desirable solution to this problem. Several unsupervised learning algorithms have been applied to coreference resolution. haghighi-klein:2007:ACLMain presented a mention-pair nonparametric fully-generative Bayesian model for unsupervised coreference resolution. Based on this model, ng:2008:EMNLP probabilistically induced coreference partitions via EM clustering. poon-domingos:2008:EMNLP proposed an entity-mention model that is able to perform joint inference across mentions by using Markov Logic. Unfortunately, these unsupervised systems' performance on accuracy significantly falls behind those of supervised systems, and are even worse than the deterministic rule-based systems. Furthermore, there is no previous work exploring the possibility of developing an unsupervised ranking model which achieved state-of-the-art performance under supervised settings for entity coreference resolution. In this paper, we propose an unsupervised generative ranking model for entity coreference resolution. Our experimental results on the English data from the CoNLL-2012 shared task BIBREF0 show that our unsupervised system outperforms the Stanford deterministic system BIBREF1 by 3.01% absolute on the CoNLL official metric. The contributions of this work are (i) proposing the first unsupervised ranking model for entity coreference resolution. (ii) giving empirical evaluations of this model on benchmark data sets. (iii) considerably narrowing the gap to supervised coreference resolution accuracy. Notations and Definitions In the following, $D = \lbrace m_0, m_1, \ldots , m_n\rbrace $ represents a generic input document which is a sequence of coreference mentions, including the artificial root mention (denoted by $m_0$ ). The method to detect and extract these mentions is discussed later in Section "Mention Detection" . Let $C = \lbrace c_1, c_2, \ldots , c_n\rbrace $ denote the coreference assignment of a given document, where each mention $m_i$ has an associated random variable $c_i$ taking values in the set $\lbrace 0, i, \ldots , i-1\rbrace $ ; this variable specifies $m_i$ 's selected antecedent ( $c_i \in \lbrace 1, 2, \ldots , i-1\rbrace $ ), or indicates that it begins a new coreference chain ( $c_i = 0$ ). 
Generative Ranking Model The following is a straightforward way to build a generative model for coreference: $$\begin{array}{rcl} P(D, C) & = & P(D|C)P(C) \\ & = & \prod \limits _{j=1}^{n}P(m_j|m_{c_j})\prod \limits _{j=1}^{n}P(c_j|j) \end{array}$$ (Eq. 3) where we factorize the probabilities $P(D|C)$ and $P(C)$ into each position $j$ by adopting appropriate independence assumptions that given the coreference assignment $c_j$ and corresponding coreferent mention $m_{c_j}$ , the mention $m_j$ is independent with other mentions in front of it. This independent assumption is similar to that in the IBM 1 model on machine translation BIBREF15 , where it assumes that given the corresponding English word, the aligned foreign word is independent with other English and foreign words. We do not make any independent assumptions among different features (see Section "Features" for details). Inference in this model is efficient, because we can compute $c_j$ separately for each mention: $ c^*_j = \operatornamewithlimits{argmax}\limits _{c_j} P(m_j|m_{c_j}) P(c_j|j) $ The model is a so-called ranking model because it is able to identify the most probable candidate antecedent given a mention to be resolved. Resolution Mode Variables According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\Pi = \lbrace \pi _1, \ldots , \pi _n\rbrace $ , where for each mention $j$ the variable $\pi _j \in \lbrace str, prec, attr\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\Pi $ is deterministic when $D$ is given (i.e. $P(\Pi |D)$ is a point distribution). We determine $\pi _j$ for each mention $m_j$ in the following way: $\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 . $\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve. $\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions. Now, we can extend the generative model in Eq. 3 to: $ \begin{array}{rcl} & & P(D, C) = P(D, C, \Pi ) \\ & = & \prod \limits _{j=1}^{n}P(m_j|m_{c_j}, \pi _j) P(c_j|\pi _j, j) P(\pi _j|j) \end{array} $ where we define $P(\pi _j|j)$ to be uniform distribution. We model $P(m_j|m_{c_j}, \pi _j)$ and $P(c_j|\pi _j, j)$ in the following way: $ \begin{array}{l} P(m_j|m_{c_j}, \pi _j) = t(m_j|m_{c_j}, \pi _j) \\ P(c_j|\pi _j, j) = \left\lbrace \begin{array}{ll} q(c_j|\pi _j, j) & \pi _j = attr \\ \frac{1}{j} & \textrm {otherwise} \end{array}\right. \end{array} $ where $\theta = \lbrace t, q\rbrace $ are parameters of our model. Note that in the attribute-matching mode ( $\pi _j = attr$ ) we model $P(c_j|\pi _j, j)$ with parameter $q$ , while in the other two modes, we use the uniform distribution. 
It makes sense because the position information is important for coreference resolved by matching attributes of two mentions such as resolving pronoun coreference, but not that important for those resolved by matching text or special relations like two mentions referring the same person and matching by the name. [t] Learning Model with EM Initialization: Initialize $\theta _0 = \lbrace t_0, q_0\rbrace $ $t=0$ to $T$ set all counts $c(\ldots ) = 0$ each document $D$ $j=1$ to $n$ $k=0$ to $j - 1$ $L_{jk} = \frac{t(m_j|m_k,\pi _j)q(k|\pi _j, j)}{\sum \limits _{i = 0}^{j-1} t(m_j|m_i,\pi _j)q(i|\pi _j, j)}$ $c(m_j, m_k, \pi _j) \mathrel {+}= L_{jk}$ $c(m_k, \pi _j) \mathrel {+}= L_{jk}$ $c(k, j, \pi _j) \mathrel {+}= L_{jk}$ $c(j, \pi _j) \mathrel {+}= L_{jk}$ Recalculate the parameters $t(m|m^{\prime }, \pi ) = \frac{c(m, m^{\prime }, \pi )}{c(m^{\prime }, \pi )}$ $q(k, j, \pi ) = \frac{c(k, j, \pi )}{c(j, \pi )}$ Features In this section, we describe the features we use to represent mentions. Specifically, as shown in Table 1 , we use different features under different resolution modes. It should be noted that only the Distance feature is designed for parameter $q$ , all other features are designed for parameter $t$ . Model Learning For model learning, we run EM algorithm BIBREF19 on our Model, treating $D$ as observed data and $C$ as latent variables. We run EM with 10 iterations and select the parameters achieving the best performance on the development data. Each iteration takes around 12 hours with 10 CPUs parallelly. The best parameters appear at around the 5th iteration, according to our experiments.The detailed derivation of the learning algorithm is shown in Appendix A. The pseudo-code is shown is Algorithm "Resolution Mode Variables" . We use uniform initialization for all the parameters in our model. Several previous work has attempted to use EM for entity coreference resolution. cherry-bergsma:2005 and charniak-elsner:2009 applied EM for pronoun anaphora resolution. ng:2008:EMNLP probabilistically induced coreference partitions via EM clustering. Recently, moosavi2014 proposed an unsupervised model utilizing the most informative relations and achieved competitive performance with the Stanford system. Mention Detection The basic rules we used to detect mentions are similar to those of Lee:2013:CL, except that their system uses a set of filtering rules designed to discard instances of pleonastic it, partitives, certain quantified noun phrases and other spurious mentions. Our system keeps partitives, quantified noun phrases and bare NP mentions, but discards pleonastic it and other spurious mentions. Experimental Setup Datasets. Due to the availability of readily parsed data, we select the APW and NYT sections of Gigaword Corpus (years 1994-2010) BIBREF20 to train the model. Following previous work BIBREF3 , we remove duplicated documents and the documents which include fewer than 3 sentences. The development and test data are the English data from the CoNLL-2012 shared task BIBREF0 , which is derived from the OntoNotes corpus BIBREF21 . The corpora statistics are shown in Table 2 . Our system is evaluated with automatically extracted mentions on the version of the data with automatic preprocessing information (e.g., predicted parse trees). Evaluation Metrics. We evaluate our model on three measures widely used in the literature: MUC BIBREF22 , B $^{3}$ BIBREF23 , and Entity-based CEAF (CEAF $_e$ ) BIBREF24 . 
In addition, we also report results on another two popular metrics: Mention-based CEAF (CEAF $_m$ ) and BLANC BIBREF25 . All the results are given by the latest version of CoNLL-2012 scorer Results and Comparison Table 3 illustrates the results of our model together as baseline with two deterministic systems, namely Stanford: the Stanford system BIBREF10 and Multigraph: the unsupervised multigraph system BIBREF26 , and one unsupervised system, namely MIR: the unsupervised system using most informative relations BIBREF27 . Our model outperforms the three baseline systems on all the evaluation metrics. Specifically, our model achieves improvements of 2.93% and 3.01% on CoNLL F1 score over the Stanford system, the winner of the CoNLL 2011 shared task, on the CoNLL 2012 development and test sets, respectively. The improvements on CoNLL F1 score over the Multigraph model are 1.41% and 1.77% on the development and test sets, respectively. Comparing with the MIR model, we obtain significant improvements of 2.62% and 3.02% on CoNLL F1 score. To make a thorough empirical comparison with previous studies, Table 3 (below the dashed line) also shows the results of some state-of-the-art supervised coreference resolution systems — IMS: the second best system in the CoNLL 2012 shared task BIBREF28 ; Latent-Tree: the latent tree model BIBREF29 obtaining the best results in the shared task; Berkeley: the Berkeley system with the final feature set BIBREF12 ; LaSO: the structured perceptron system with non-local features BIBREF30 ; Latent-Strc: the latent structure system BIBREF31 ; Model-Stack: the entity-centric system with model stacking BIBREF32 ; and Non-Linear: the non-linear mention-ranking model with feature representations BIBREF33 . Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 . Conclusion We proposed a new generative, unsupervised ranking model for entity coreference resolution into which we introduced resolution mode variables to distinguish mentions resolved by different categories of information. Experimental results on the data from CoNLL-2012 shared task show that our system significantly improves the accuracy on different evaluation metrics over the baseline systems. One possible direction for future work is to differentiate more resolution modes. Another one is to add more precise or even event-based features to improve the model's performance. Acknowledgements This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. Appendix A. Derivation of Model Learning Formally, we iteratively estimate the model parameters $\theta $ , employing the following EM algorithm: For simplicity, we denote: $ {\small \begin{array}{rcl} P(C|D; \theta ) & = & \tilde{P}(C|D) \\ P(C|D; \theta ^{\prime }) & = & P(C|D) \end{array}} $ In addition, we use $\tau (\pi _j|j)$ to denote the probability $P(\pi _j|j)$ which is uniform distribution in our model. 
Moreover, we use the following notation for convenience: $ {\small \theta (m_j, m_k, j, k, \pi _j) = t(m_j|m_k, \pi _j) q(k|\pi _j, j) \tau (\pi _j|j) } $ Then, we have $ {\scriptsize { \begin{array}{rl} & E_{\tilde{P}(c|D)} [\log P(D, C)] \\ = & \sum \limits _{C} \tilde{P}(C|D) \log P(D, C) \\ = & \sum \limits _{C} \tilde{P}(C|D) \big (\sum \limits _{j=1}^{n} \log t(m_j|m_{c_j}, \pi _j) + \log q(c_j|\pi _j, j) + \log \tau (\pi _j|j) \big ) \\ = & \sum \limits _{j=1}^{n} \sum \limits _{k=0}^{j-1} L_{jk} \big (\log t(m_j|m_k, \pi _j) + \log q(k|\pi _j, j) + \log \tau (\pi _j|j) \big ) \end{array}}} $ Then the parameters $t$ and $q$ that maximize $E_{\tilde{P}(c|D)} [\log P(D, C)]$ satisfy that $ {\small \begin{array}{rcl} t(m_j|m_k, \pi _j) & = & \frac{L_{jk}}{\sum \limits _{i = 1}^{n} L_{ik}} \\ q(k|\pi _j, j) & = & \frac{L_{jk}}{\sum \limits _{i = 0}^{j-1} L_{ji}} \end{array}} $ where $L_{jk}$ can be calculated by $ {\small \begin{array}{rcl} L_{jk} & = & \sum \limits _{C, c_j=k} \tilde{P}(C|D) = \frac{\sum \limits _{C, c_j=k} \tilde{P}(C, D)}{\sum \limits _{C} \tilde{P}(C, D)} \\ & = & \frac{\sum \limits _{C, c_j=k}\prod \limits _{i = 1}^{n}\tilde{\theta }(m_i, m_{c_i}, c_i, i, \pi _i)}{\sum \limits _{C}\prod \limits _{i = 1}^{n}\tilde{\theta }(m_i, m_{c_i}, c_i, i, \pi _i)} \\ & = & \frac{\tilde{\theta }(m_j, m_k, k, j, \pi _j)\sum \limits _{C(-j)}\tilde{P}(C(-j)|D)}{\sum \limits _{i=0}^{j-1}\tilde{\theta }(m_j, m_i, i, j, \pi _j)\sum \limits _{C(-j)}\tilde{P}(C(-j)|D)} \\ & = & \frac{\tilde{\theta }(m_j, m_k, k, j, \pi _j)}{\sum \limits _{i=0}^{j-1}\tilde{\theta }(m_j, m_i, i, j, \pi _j)} \\ & = & \frac{\tilde{t}(m_j|m_k, \pi _j) \tilde{q}(k|\pi _j, j) \tilde{\tau }(\pi _j|j)}{\sum \limits _{i=0}^{j-1}\tilde{t}(m_j|m_i, \pi _j) \tilde{q}(i|\pi _j, j) \tilde{\tau }(\pi _j|j)} \\ & = & \frac{\tilde{t}(m_j|m_k, \pi _j) \tilde{q}(k|\pi _j, j)}{\sum \limits _{i=0}^{j-1}\tilde{t}(m_j|m_i, \pi _j) \tilde{q}(i|\pi _j, j)} \end{array}} $ where $C(-j) = \lbrace c_1, \ldots , c_{j-1}, c_{j+1}, \ldots , c_{n}\rbrace $ . The above derivations correspond to the learning algorithm in Algorithm "Resolution Mode Variables" .
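Inference in the ranking model described above decomposes per mention, so the most probable antecedent can be found with a simple argmax over candidates. A minimal sketch, using toy probability tables rather than learned parameters, looks like this:

def best_antecedent(j, pi_j, mentions, t, q):
    # c*_j = argmax_k  P(m_j | m_k, pi_j) * P(c_j = k | pi_j, j),
    # computed independently for each mention j.
    def prior(k):
        return q.get((k, j, pi_j), 0.0) if pi_j == "attr" else 1.0 / j
    scores = {k: t.get((mentions[j], mentions[k], pi_j), 0.0) * prior(k)
              for k in range(j)}                     # k = 0 means starting a new chain
    return max(scores, key=scores.get)

# Toy tables: "he" is most likely generated by "Obama" in attribute-matching mode.
mentions = ["<ROOT>", "Obama", "the election", "he"]
t = {("he", "Obama", "attr"): 0.6, ("he", "the election", "attr"): 0.1, ("he", "<ROOT>", "attr"): 0.3}
q = {(0, 3, "attr"): 0.2, (1, 3, "attr"): 0.4, (2, 3, "attr"): 0.4}
print(best_antecedent(3, "attr", mentions, t, q))    # 1, i.e. "Obama"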
No, supervised models perform better for this task.
b8b588ca1e876b3094ae561a875dd949c8965b2e
b8b588ca1e876b3094ae561a875dd949c8965b2e_0
Q: What problems are found with the evaluation scheme? Text: Introduction Recently, human-computer dialogue has been emerged as a hot topic, which has attracted the attention of both academia and industry. In research, the natural language understanding (NLU), dialogue management (DM) and natural language generation (NLG) have been promoted by the technologies of big data and deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Following the development of machine reading comprehension BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , the NLU technology has made great progress. The development of DM technology is from rule-based approach and supervised learning based approach to reinforcement learning based approach BIBREF15 . The NLG technology is through pattern-based approach, sentence planning approach and end-to-end deep learning approach BIBREF16 , BIBREF17 , BIBREF18 . In application, there are massive products that are based on the technology of human-computer dialogue, such as Apple Siri, Amazon Echo, Microsoft Cortana, Facebook Messenger and Google Allo etc. Although the blooming of human-computer dialogue technology in both academia and industry, how to evaluate a dialogue system, especially an open domain chit-chat system, is still an open question. Figure FIGREF6 presents a brief comparison of the open domain chit-chat system and the task-oriented dialogue system. From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be infinite number of responses for an open domain messages. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue. To promote the development of the evaluation technology for dialogue systems, especially considering the language characteristics of Chinese, we organize the first evaluation of Chinese human-computer dialogue technology. In this paper, we will present the evaluation scheme and the released corpus in detail. The rest of this paper is as follows. In Section 2, we will briefly introduce the first evaluation of Chinese human-computer dialogue technology, which includes the descriptions and the evaluation metrics of the two tasks. We then present the evaluation data and final results in Section 3 and 4 respectively, following the conclusion and acknowledgements in the last two sections. The First Evaluation of Chinese Human-Computer Dialogue Technology The First Evaluation of Chinese Human-Computer Dialogue Technology includes two tasks, namely user intent classification and online testing of task-oriented dialogue. 
Task 1: User Intent Classification In using of human-computer dialogue based applications, human may have various intent, for example, chit-chatting, asking questions, booking air tickets, inquiring weather, etc. Therefore, after receiving an input message (text or ASR result) from a user, the first step is to classify the user intent into a specific domain for further processing. Table TABREF7 shows an example of user intent with category information. In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue. The task-oriented dialogue also includes 30 sub categories. In this evaluation, we only consider to classify the user intent in single utterance. It is worth noting that besides the released data for training and developing, we also allow to collect external data for training and developing. To considering that, the task 1 is indeed includes two sub tasks. One is a closed evaluation, in which only the released data can be used for training and developing. The other is an open evaluation that allow to collect external data for training and developing. For task 1, we use F1-score as evaluation metric. Task 2: Online Testing of Task-oriented Dialogue For the task-oriented dialogue systems, the best way for evaluation is to use the online human-computer dialogue. After finishing an online human-computer dialogue with a dialogue system, the human then manually evaluate the system by using the metrics of user satisfaction degree, dialogue fluency, etc. Therefore, in the task 2, we use an online testing of task-oriented dialogue for dialogue systems. For a human tester, we will give a complete intent with an initial sentence, which is used to start the online human-computer dialogue. Table TABREF12 shows an example of the task-oriented human-computer dialogue. Here “U” and “R” denote user and robot respectively. The complete intent is as following: “查询明天从哈尔滨到北京的晚间软卧火车票,上下铺均可。 Inquire the soft berth ticket at tomorrow evening, from Harbin to Beijing, either upper or lower berth is okay.” In task 2, there are three categories. They are “air tickets”, “train tickets” and “hotel”. Correspondingly, there are three type of tasks. All the tasks are in the scope of the three categories. However, a complete user intent may include more than one task. For example, a user may first inquiring the air tickets. However, due to the high price, the user decide to buy a train tickets. Furthermore, the user may also need to book a hotel room at the destination. We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begin to converse to each system. A dialogue is finished if the system successfully returns the information which the user inquires or the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of participants, we release an example set of complete user intent and three data files of flight, train and hotel in JSON format. There are five evaluation metrics for task 2 as following. Task completion ratio: The number of completed tasks divided by the number of total tasks. User satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively. Response fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency. Number of dialogue turns: The number of utterances in a task-completed dialogue. 
Guidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide. For the number of dialogue turns, we have a penalty rule that for a dialogue task, if the system cannot return the result (or accomplish the task) in 30 turns, the dialogue task is end by force. Meanwhile, if a system cannot accomplish a task in less than 30 dialogue turns, the number of dialogue turns is set to 30. Evaluation Data In the evaluation, all the data for training, developing and test is provided by the iFLYTEK Corporation. For task 1, as the descriptions in Section SECREF10 , the two top categories are chit-chat (chat in Table TABREF13 ) and task-oriented dialogue. Meanwhile, the task-oriented dialogue also includes 30 sub categories. Actually, the task 1 is a 31 categories classification task. In task 1, besides the data we released for training and developing, we also allow the participants to extend the training and developing corpus. Hence, there are two sub tasks for the task 1. One is closed test, which means the participants can only use the released data for training and developing. The other is open test, which allows the participants to explore external corpus for training and developing. Note that there is a same test set for both the closed test and the open test. For task 2, we release 11 examples of the complete user intent and 3 data file, which includes about one month of flight, hotel and train information, for participants to build their dialogue systems. The current date for online test is set to April 18, 2017. If the tester says “today”, the systems developed by the participants should understand that he/she indicates the date of April 18, 2017. Evaluation Results There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the version of full paper. Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2. Conclusion In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. In detail, we first present the two tasks of the evaluation as well as the evaluation metrics. We then describe the released data for evaluation. Finally, we also show the evaluation results of the two tasks. As the evaluation data is provided by the iFLYTEK Corporation from their real online applications, we believe that the released data will further promote the research of human-computer dialogue and fill the blank of the data on the two tasks. Acknowledgements We would like to thank the Social Media Processing (SMP) committee of Chinese Information Processing Society of China. We thank all the participants of the first evaluation of Chinese human-computer dialogue technology. We also thank the testers from the voice resource department of the iFLYTEK Corporation for their effort to the online real-time human-computer dialogue test and offline dialogue evaluation. 
We thank Lingzhi Li, Yangzi Zhang, Jiaqi Zhu and Xiaoming Shi from the Research Center for Social Computing and Information Retrieval for their support in annotating the data, establishing the system testing environment, communicating with the participants, and helping connect their systems to the testing environment.
There is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue.
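To make the task-2 scoring scheme described in the record above concrete, here is a minimal sketch of how the five metrics and the 30-turn penalty rule could be aggregated for a submitted system. This is not code from the evaluation itself: the field and function names are illustrative, and averaging the judgments over dialogues is an assumption, since the paper does not state how per-dialogue scores are combined.

```python
from dataclasses import dataclass

@dataclass
class DialogueResult:
    """Manual judgments for one task-oriented dialogue (field names are illustrative)."""
    completed: bool      # did the system return the information the user asked for?
    satisfaction: int    # user satisfaction degree: -2, -1, 0, 1, 2
    fluency: int         # response fluency: -1, 0, 1
    turns: int           # number of utterances in the dialogue
    guidance: int        # guidance ability for out-of-scope input: 0 or 1

MAX_TURNS = 30  # a single task is ended by force after 30 turns

def apply_turn_penalty(result: DialogueResult) -> int:
    """Penalty rule from the evaluation: an uncompleted task counts as 30 turns."""
    return MAX_TURNS if not result.completed else min(result.turns, MAX_TURNS)

def aggregate(results: list[DialogueResult]) -> dict:
    """Average the five task-2 metrics over all evaluated dialogues (averaging is an assumption)."""
    n = len(results)
    return {
        "task_completion_ratio": sum(r.completed for r in results) / n,
        "user_satisfaction": sum(r.satisfaction for r in results) / n,
        "response_fluency": sum(r.fluency for r in results) / n,
        "dialogue_turns": sum(apply_turn_penalty(r) for r in results) / n,
        "guidance_ability": sum(r.guidance for r in results) / n,
    }
```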
2ec640e6b4f1ebc158d13ee6589778b4c08a04c8
2ec640e6b4f1ebc158d13ee6589778b4c08a04c8_0
Q: How is the data annotated? Text: Introduction Recently, human-computer dialogue has been emerged as a hot topic, which has attracted the attention of both academia and industry. In research, the natural language understanding (NLU), dialogue management (DM) and natural language generation (NLG) have been promoted by the technologies of big data and deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Following the development of machine reading comprehension BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , the NLU technology has made great progress. The development of DM technology is from rule-based approach and supervised learning based approach to reinforcement learning based approach BIBREF15 . The NLG technology is through pattern-based approach, sentence planning approach and end-to-end deep learning approach BIBREF16 , BIBREF17 , BIBREF18 . In application, there are massive products that are based on the technology of human-computer dialogue, such as Apple Siri, Amazon Echo, Microsoft Cortana, Facebook Messenger and Google Allo etc. Although the blooming of human-computer dialogue technology in both academia and industry, how to evaluate a dialogue system, especially an open domain chit-chat system, is still an open question. Figure FIGREF6 presents a brief comparison of the open domain chit-chat system and the task-oriented dialogue system. From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be infinite number of responses for an open domain messages. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue. To promote the development of the evaluation technology for dialogue systems, especially considering the language characteristics of Chinese, we organize the first evaluation of Chinese human-computer dialogue technology. In this paper, we will present the evaluation scheme and the released corpus in detail. The rest of this paper is as follows. In Section 2, we will briefly introduce the first evaluation of Chinese human-computer dialogue technology, which includes the descriptions and the evaluation metrics of the two tasks. We then present the evaluation data and final results in Section 3 and 4 respectively, following the conclusion and acknowledgements in the last two sections. The First Evaluation of Chinese Human-Computer Dialogue Technology The First Evaluation of Chinese Human-Computer Dialogue Technology includes two tasks, namely user intent classification and online testing of task-oriented dialogue. 
Task 1: User Intent Classification In using of human-computer dialogue based applications, human may have various intent, for example, chit-chatting, asking questions, booking air tickets, inquiring weather, etc. Therefore, after receiving an input message (text or ASR result) from a user, the first step is to classify the user intent into a specific domain for further processing. Table TABREF7 shows an example of user intent with category information. In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue. The task-oriented dialogue also includes 30 sub categories. In this evaluation, we only consider to classify the user intent in single utterance. It is worth noting that besides the released data for training and developing, we also allow to collect external data for training and developing. To considering that, the task 1 is indeed includes two sub tasks. One is a closed evaluation, in which only the released data can be used for training and developing. The other is an open evaluation that allow to collect external data for training and developing. For task 1, we use F1-score as evaluation metric. Task 2: Online Testing of Task-oriented Dialogue For the task-oriented dialogue systems, the best way for evaluation is to use the online human-computer dialogue. After finishing an online human-computer dialogue with a dialogue system, the human then manually evaluate the system by using the metrics of user satisfaction degree, dialogue fluency, etc. Therefore, in the task 2, we use an online testing of task-oriented dialogue for dialogue systems. For a human tester, we will give a complete intent with an initial sentence, which is used to start the online human-computer dialogue. Table TABREF12 shows an example of the task-oriented human-computer dialogue. Here “U” and “R” denote user and robot respectively. The complete intent is as following: “查询明天从哈尔滨到北京的晚间软卧火车票,上下铺均可。 Inquire the soft berth ticket at tomorrow evening, from Harbin to Beijing, either upper or lower berth is okay.” In task 2, there are three categories. They are “air tickets”, “train tickets” and “hotel”. Correspondingly, there are three type of tasks. All the tasks are in the scope of the three categories. However, a complete user intent may include more than one task. For example, a user may first inquiring the air tickets. However, due to the high price, the user decide to buy a train tickets. Furthermore, the user may also need to book a hotel room at the destination. We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begin to converse to each system. A dialogue is finished if the system successfully returns the information which the user inquires or the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of participants, we release an example set of complete user intent and three data files of flight, train and hotel in JSON format. There are five evaluation metrics for task 2 as following. Task completion ratio: The number of completed tasks divided by the number of total tasks. User satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively. Response fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency. Number of dialogue turns: The number of utterances in a task-completed dialogue. 
Guidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide. For the number of dialogue turns, we have a penalty rule that for a dialogue task, if the system cannot return the result (or accomplish the task) in 30 turns, the dialogue task is end by force. Meanwhile, if a system cannot accomplish a task in less than 30 dialogue turns, the number of dialogue turns is set to 30. Evaluation Data In the evaluation, all the data for training, developing and test is provided by the iFLYTEK Corporation. For task 1, as the descriptions in Section SECREF10 , the two top categories are chit-chat (chat in Table TABREF13 ) and task-oriented dialogue. Meanwhile, the task-oriented dialogue also includes 30 sub categories. Actually, the task 1 is a 31 categories classification task. In task 1, besides the data we released for training and developing, we also allow the participants to extend the training and developing corpus. Hence, there are two sub tasks for the task 1. One is closed test, which means the participants can only use the released data for training and developing. The other is open test, which allows the participants to explore external corpus for training and developing. Note that there is a same test set for both the closed test and the open test. For task 2, we release 11 examples of the complete user intent and 3 data file, which includes about one month of flight, hotel and train information, for participants to build their dialogue systems. The current date for online test is set to April 18, 2017. If the tester says “today”, the systems developed by the participants should understand that he/she indicates the date of April 18, 2017. Evaluation Results There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the version of full paper. Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2. Conclusion In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. In detail, we first present the two tasks of the evaluation as well as the evaluation metrics. We then describe the released data for evaluation. Finally, we also show the evaluation results of the two tasks. As the evaluation data is provided by the iFLYTEK Corporation from their real online applications, we believe that the released data will further promote the research of human-computer dialogue and fill the blank of the data on the two tasks. Acknowledgements We would like to thank the Social Media Processing (SMP) committee of Chinese Information Processing Society of China. We thank all the participants of the first evaluation of Chinese human-computer dialogue technology. We also thank the testers from the voice resource department of the iFLYTEK Corporation for their effort to the online real-time human-computer dialogue test and offline dialogue evaluation. 
We thank Lingzhi Li, Yangzi Zhang, Jiaqi Zhu and Xiaoming Shi from the Research Center for Social Computing and Information Retrieval for their support in annotating the data, establishing the system testing environment, communicating with the participants, and helping connect their systems to the testing environment.
Unanswerable
ab0bb4d0a9796416d3d7ceba0ba9ab50c964e9d6
ab0bb4d0a9796416d3d7ceba0ba9ab50c964e9d6_0
Q: What collection steps do they mention? Text: Introduction Recently, human-computer dialogue has been emerged as a hot topic, which has attracted the attention of both academia and industry. In research, the natural language understanding (NLU), dialogue management (DM) and natural language generation (NLG) have been promoted by the technologies of big data and deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Following the development of machine reading comprehension BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , the NLU technology has made great progress. The development of DM technology is from rule-based approach and supervised learning based approach to reinforcement learning based approach BIBREF15 . The NLG technology is through pattern-based approach, sentence planning approach and end-to-end deep learning approach BIBREF16 , BIBREF17 , BIBREF18 . In application, there are massive products that are based on the technology of human-computer dialogue, such as Apple Siri, Amazon Echo, Microsoft Cortana, Facebook Messenger and Google Allo etc. Although the blooming of human-computer dialogue technology in both academia and industry, how to evaluate a dialogue system, especially an open domain chit-chat system, is still an open question. Figure FIGREF6 presents a brief comparison of the open domain chit-chat system and the task-oriented dialogue system. From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be infinite number of responses for an open domain messages. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue. To promote the development of the evaluation technology for dialogue systems, especially considering the language characteristics of Chinese, we organize the first evaluation of Chinese human-computer dialogue technology. In this paper, we will present the evaluation scheme and the released corpus in detail. The rest of this paper is as follows. In Section 2, we will briefly introduce the first evaluation of Chinese human-computer dialogue technology, which includes the descriptions and the evaluation metrics of the two tasks. We then present the evaluation data and final results in Section 3 and 4 respectively, following the conclusion and acknowledgements in the last two sections. The First Evaluation of Chinese Human-Computer Dialogue Technology The First Evaluation of Chinese Human-Computer Dialogue Technology includes two tasks, namely user intent classification and online testing of task-oriented dialogue. 
Task 1: User Intent Classification In using of human-computer dialogue based applications, human may have various intent, for example, chit-chatting, asking questions, booking air tickets, inquiring weather, etc. Therefore, after receiving an input message (text or ASR result) from a user, the first step is to classify the user intent into a specific domain for further processing. Table TABREF7 shows an example of user intent with category information. In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue. The task-oriented dialogue also includes 30 sub categories. In this evaluation, we only consider to classify the user intent in single utterance. It is worth noting that besides the released data for training and developing, we also allow to collect external data for training and developing. To considering that, the task 1 is indeed includes two sub tasks. One is a closed evaluation, in which only the released data can be used for training and developing. The other is an open evaluation that allow to collect external data for training and developing. For task 1, we use F1-score as evaluation metric. Task 2: Online Testing of Task-oriented Dialogue For the task-oriented dialogue systems, the best way for evaluation is to use the online human-computer dialogue. After finishing an online human-computer dialogue with a dialogue system, the human then manually evaluate the system by using the metrics of user satisfaction degree, dialogue fluency, etc. Therefore, in the task 2, we use an online testing of task-oriented dialogue for dialogue systems. For a human tester, we will give a complete intent with an initial sentence, which is used to start the online human-computer dialogue. Table TABREF12 shows an example of the task-oriented human-computer dialogue. Here “U” and “R” denote user and robot respectively. The complete intent is as following: “查询明天从哈尔滨到北京的晚间软卧火车票,上下铺均可。 Inquire the soft berth ticket at tomorrow evening, from Harbin to Beijing, either upper or lower berth is okay.” In task 2, there are three categories. They are “air tickets”, “train tickets” and “hotel”. Correspondingly, there are three type of tasks. All the tasks are in the scope of the three categories. However, a complete user intent may include more than one task. For example, a user may first inquiring the air tickets. However, due to the high price, the user decide to buy a train tickets. Furthermore, the user may also need to book a hotel room at the destination. We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begin to converse to each system. A dialogue is finished if the system successfully returns the information which the user inquires or the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of participants, we release an example set of complete user intent and three data files of flight, train and hotel in JSON format. There are five evaluation metrics for task 2 as following. Task completion ratio: The number of completed tasks divided by the number of total tasks. User satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively. Response fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency. Number of dialogue turns: The number of utterances in a task-completed dialogue. 
Guidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide. For the number of dialogue turns, we have a penalty rule that for a dialogue task, if the system cannot return the result (or accomplish the task) in 30 turns, the dialogue task is end by force. Meanwhile, if a system cannot accomplish a task in less than 30 dialogue turns, the number of dialogue turns is set to 30. Evaluation Data In the evaluation, all the data for training, developing and test is provided by the iFLYTEK Corporation. For task 1, as the descriptions in Section SECREF10 , the two top categories are chit-chat (chat in Table TABREF13 ) and task-oriented dialogue. Meanwhile, the task-oriented dialogue also includes 30 sub categories. Actually, the task 1 is a 31 categories classification task. In task 1, besides the data we released for training and developing, we also allow the participants to extend the training and developing corpus. Hence, there are two sub tasks for the task 1. One is closed test, which means the participants can only use the released data for training and developing. The other is open test, which allows the participants to explore external corpus for training and developing. Note that there is a same test set for both the closed test and the open test. For task 2, we release 11 examples of the complete user intent and 3 data file, which includes about one month of flight, hotel and train information, for participants to build their dialogue systems. The current date for online test is set to April 18, 2017. If the tester says “today”, the systems developed by the participants should understand that he/she indicates the date of April 18, 2017. Evaluation Results There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the version of full paper. Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2. Conclusion In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. In detail, we first present the two tasks of the evaluation as well as the evaluation metrics. We then describe the released data for evaluation. Finally, we also show the evaluation results of the two tasks. As the evaluation data is provided by the iFLYTEK Corporation from their real online applications, we believe that the released data will further promote the research of human-computer dialogue and fill the blank of the data on the two tasks. Acknowledgements We would like to thank the Social Media Processing (SMP) committee of Chinese Information Processing Society of China. We thank all the participants of the first evaluation of Chinese human-computer dialogue technology. We also thank the testers from the voice resource department of the iFLYTEK Corporation for their effort to the online real-time human-computer dialogue test and offline dialogue evaluation. 
We thank Lingzhi Li, Yangzi Zhang, Jiaqi Zhu and Xiaoming Shi from the Research Center for Social Computing and Information Retrieval for their support in annotating the data, establishing the system testing environment, communicating with the participants, and helping connect their systems to the testing environment.
Unanswerable
0460019eb2186aef835f7852fc445b037bd43bb7
0460019eb2186aef835f7852fc445b037bd43bb7_0
Q: How many intents were classified? Text: Introduction Recently, human-computer dialogue has been emerged as a hot topic, which has attracted the attention of both academia and industry. In research, the natural language understanding (NLU), dialogue management (DM) and natural language generation (NLG) have been promoted by the technologies of big data and deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Following the development of machine reading comprehension BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , the NLU technology has made great progress. The development of DM technology is from rule-based approach and supervised learning based approach to reinforcement learning based approach BIBREF15 . The NLG technology is through pattern-based approach, sentence planning approach and end-to-end deep learning approach BIBREF16 , BIBREF17 , BIBREF18 . In application, there are massive products that are based on the technology of human-computer dialogue, such as Apple Siri, Amazon Echo, Microsoft Cortana, Facebook Messenger and Google Allo etc. Although the blooming of human-computer dialogue technology in both academia and industry, how to evaluate a dialogue system, especially an open domain chit-chat system, is still an open question. Figure FIGREF6 presents a brief comparison of the open domain chit-chat system and the task-oriented dialogue system. From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be infinite number of responses for an open domain messages. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue. To promote the development of the evaluation technology for dialogue systems, especially considering the language characteristics of Chinese, we organize the first evaluation of Chinese human-computer dialogue technology. In this paper, we will present the evaluation scheme and the released corpus in detail. The rest of this paper is as follows. In Section 2, we will briefly introduce the first evaluation of Chinese human-computer dialogue technology, which includes the descriptions and the evaluation metrics of the two tasks. We then present the evaluation data and final results in Section 3 and 4 respectively, following the conclusion and acknowledgements in the last two sections. The First Evaluation of Chinese Human-Computer Dialogue Technology The First Evaluation of Chinese Human-Computer Dialogue Technology includes two tasks, namely user intent classification and online testing of task-oriented dialogue. 
Task 1: User Intent Classification In using of human-computer dialogue based applications, human may have various intent, for example, chit-chatting, asking questions, booking air tickets, inquiring weather, etc. Therefore, after receiving an input message (text or ASR result) from a user, the first step is to classify the user intent into a specific domain for further processing. Table TABREF7 shows an example of user intent with category information. In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue. The task-oriented dialogue also includes 30 sub categories. In this evaluation, we only consider to classify the user intent in single utterance. It is worth noting that besides the released data for training and developing, we also allow to collect external data for training and developing. To considering that, the task 1 is indeed includes two sub tasks. One is a closed evaluation, in which only the released data can be used for training and developing. The other is an open evaluation that allow to collect external data for training and developing. For task 1, we use F1-score as evaluation metric. Task 2: Online Testing of Task-oriented Dialogue For the task-oriented dialogue systems, the best way for evaluation is to use the online human-computer dialogue. After finishing an online human-computer dialogue with a dialogue system, the human then manually evaluate the system by using the metrics of user satisfaction degree, dialogue fluency, etc. Therefore, in the task 2, we use an online testing of task-oriented dialogue for dialogue systems. For a human tester, we will give a complete intent with an initial sentence, which is used to start the online human-computer dialogue. Table TABREF12 shows an example of the task-oriented human-computer dialogue. Here “U” and “R” denote user and robot respectively. The complete intent is as following: “查询明天从哈尔滨到北京的晚间软卧火车票,上下铺均可。 Inquire the soft berth ticket at tomorrow evening, from Harbin to Beijing, either upper or lower berth is okay.” In task 2, there are three categories. They are “air tickets”, “train tickets” and “hotel”. Correspondingly, there are three type of tasks. All the tasks are in the scope of the three categories. However, a complete user intent may include more than one task. For example, a user may first inquiring the air tickets. However, due to the high price, the user decide to buy a train tickets. Furthermore, the user may also need to book a hotel room at the destination. We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begin to converse to each system. A dialogue is finished if the system successfully returns the information which the user inquires or the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of participants, we release an example set of complete user intent and three data files of flight, train and hotel in JSON format. There are five evaluation metrics for task 2 as following. Task completion ratio: The number of completed tasks divided by the number of total tasks. User satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively. Response fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency. Number of dialogue turns: The number of utterances in a task-completed dialogue. 
Guidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide. For the number of dialogue turns, we have a penalty rule that for a dialogue task, if the system cannot return the result (or accomplish the task) in 30 turns, the dialogue task is end by force. Meanwhile, if a system cannot accomplish a task in less than 30 dialogue turns, the number of dialogue turns is set to 30. Evaluation Data In the evaluation, all the data for training, developing and test is provided by the iFLYTEK Corporation. For task 1, as the descriptions in Section SECREF10 , the two top categories are chit-chat (chat in Table TABREF13 ) and task-oriented dialogue. Meanwhile, the task-oriented dialogue also includes 30 sub categories. Actually, the task 1 is a 31 categories classification task. In task 1, besides the data we released for training and developing, we also allow the participants to extend the training and developing corpus. Hence, there are two sub tasks for the task 1. One is closed test, which means the participants can only use the released data for training and developing. The other is open test, which allows the participants to explore external corpus for training and developing. Note that there is a same test set for both the closed test and the open test. For task 2, we release 11 examples of the complete user intent and 3 data file, which includes about one month of flight, hotel and train information, for participants to build their dialogue systems. The current date for online test is set to April 18, 2017. If the tester says “today”, the systems developed by the participants should understand that he/she indicates the date of April 18, 2017. Evaluation Results There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the version of full paper. Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2. Conclusion In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. In detail, we first present the two tasks of the evaluation as well as the evaluation metrics. We then describe the released data for evaluation. Finally, we also show the evaluation results of the two tasks. As the evaluation data is provided by the iFLYTEK Corporation from their real online applications, we believe that the released data will further promote the research of human-computer dialogue and fill the blank of the data on the two tasks. Acknowledgements We would like to thank the Social Media Processing (SMP) committee of Chinese Information Processing Society of China. We thank all the participants of the first evaluation of Chinese human-computer dialogue technology. We also thank the testers from the voice resource department of the iFLYTEK Corporation for their effort to the online real-time human-computer dialogue test and offline dialogue evaluation. 
We thank Lingzhi Li, Yangzi Zhang, Jiaqi Zhu and Xiaoming Shi from the Research Center for Social Computing and Information Retrieval for their support in annotating the data, establishing the system testing environment, communicating with the participants, and helping connect their systems to the testing environment.
Two top-level categories (chit-chat and task-oriented dialogue); the task-oriented category is further divided into 30 sub-categories, so task 1 is in effect a 31-class classification task.
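For illustration, the sketch below scores a hypothetical task-1 submission under the 31-way intent classification setup quoted above (chit-chat plus 30 task-oriented sub-categories). The toy label names and the macro-averaging choice are assumptions; the evaluation description only states that F1-score is the metric.

```python
from sklearn.metrics import f1_score

# Hypothetical label scheme: "chat" plus 30 task-oriented sub-categories such as
# "flight", "train", "hotel", "weather", ... (the real inventory is defined by the
# released iFLYTEK data, which is not reproduced here).
def score_task1(gold_labels: list[str], predicted_labels: list[str]) -> float:
    """Score a task-1 submission with F1 over the 31 intent classes.

    The averaging mode (macro vs. micro) is an assumption; the evaluation
    description only says "F1-score".
    """
    return f1_score(gold_labels, predicted_labels, average="macro")

# Toy usage with utterance-level predictions:
gold = ["chat", "flight", "hotel", "train", "chat"]
pred = ["chat", "flight", "hotel", "flight", "chat"]
print(f"Task-1 F1: {score_task1(gold, pred):.4f}")
```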
96c09ece36a992762860cde4c110f1653c110d96
96c09ece36a992762860cde4c110f1653c110d96_0
Q: What was the result of the highest performing system? Text: Introduction Recently, human-computer dialogue has been emerged as a hot topic, which has attracted the attention of both academia and industry. In research, the natural language understanding (NLU), dialogue management (DM) and natural language generation (NLG) have been promoted by the technologies of big data and deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Following the development of machine reading comprehension BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , the NLU technology has made great progress. The development of DM technology is from rule-based approach and supervised learning based approach to reinforcement learning based approach BIBREF15 . The NLG technology is through pattern-based approach, sentence planning approach and end-to-end deep learning approach BIBREF16 , BIBREF17 , BIBREF18 . In application, there are massive products that are based on the technology of human-computer dialogue, such as Apple Siri, Amazon Echo, Microsoft Cortana, Facebook Messenger and Google Allo etc. Although the blooming of human-computer dialogue technology in both academia and industry, how to evaluate a dialogue system, especially an open domain chit-chat system, is still an open question. Figure FIGREF6 presents a brief comparison of the open domain chit-chat system and the task-oriented dialogue system. From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be infinite number of responses for an open domain messages. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue. To promote the development of the evaluation technology for dialogue systems, especially considering the language characteristics of Chinese, we organize the first evaluation of Chinese human-computer dialogue technology. In this paper, we will present the evaluation scheme and the released corpus in detail. The rest of this paper is as follows. In Section 2, we will briefly introduce the first evaluation of Chinese human-computer dialogue technology, which includes the descriptions and the evaluation metrics of the two tasks. We then present the evaluation data and final results in Section 3 and 4 respectively, following the conclusion and acknowledgements in the last two sections. The First Evaluation of Chinese Human-Computer Dialogue Technology The First Evaluation of Chinese Human-Computer Dialogue Technology includes two tasks, namely user intent classification and online testing of task-oriented dialogue. 
Task 1: User Intent Classification In using of human-computer dialogue based applications, human may have various intent, for example, chit-chatting, asking questions, booking air tickets, inquiring weather, etc. Therefore, after receiving an input message (text or ASR result) from a user, the first step is to classify the user intent into a specific domain for further processing. Table TABREF7 shows an example of user intent with category information. In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue. The task-oriented dialogue also includes 30 sub categories. In this evaluation, we only consider to classify the user intent in single utterance. It is worth noting that besides the released data for training and developing, we also allow to collect external data for training and developing. To considering that, the task 1 is indeed includes two sub tasks. One is a closed evaluation, in which only the released data can be used for training and developing. The other is an open evaluation that allow to collect external data for training and developing. For task 1, we use F1-score as evaluation metric. Task 2: Online Testing of Task-oriented Dialogue For the task-oriented dialogue systems, the best way for evaluation is to use the online human-computer dialogue. After finishing an online human-computer dialogue with a dialogue system, the human then manually evaluate the system by using the metrics of user satisfaction degree, dialogue fluency, etc. Therefore, in the task 2, we use an online testing of task-oriented dialogue for dialogue systems. For a human tester, we will give a complete intent with an initial sentence, which is used to start the online human-computer dialogue. Table TABREF12 shows an example of the task-oriented human-computer dialogue. Here “U” and “R” denote user and robot respectively. The complete intent is as following: “查询明天从哈尔滨到北京的晚间软卧火车票,上下铺均可。 Inquire the soft berth ticket at tomorrow evening, from Harbin to Beijing, either upper or lower berth is okay.” In task 2, there are three categories. They are “air tickets”, “train tickets” and “hotel”. Correspondingly, there are three type of tasks. All the tasks are in the scope of the three categories. However, a complete user intent may include more than one task. For example, a user may first inquiring the air tickets. However, due to the high price, the user decide to buy a train tickets. Furthermore, the user may also need to book a hotel room at the destination. We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begin to converse to each system. A dialogue is finished if the system successfully returns the information which the user inquires or the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of participants, we release an example set of complete user intent and three data files of flight, train and hotel in JSON format. There are five evaluation metrics for task 2 as following. Task completion ratio: The number of completed tasks divided by the number of total tasks. User satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively. Response fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency. Number of dialogue turns: The number of utterances in a task-completed dialogue. 
Guidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide. For the number of dialogue turns, we have a penalty rule that for a dialogue task, if the system cannot return the result (or accomplish the task) in 30 turns, the dialogue task is end by force. Meanwhile, if a system cannot accomplish a task in less than 30 dialogue turns, the number of dialogue turns is set to 30. Evaluation Data In the evaluation, all the data for training, developing and test is provided by the iFLYTEK Corporation. For task 1, as the descriptions in Section SECREF10 , the two top categories are chit-chat (chat in Table TABREF13 ) and task-oriented dialogue. Meanwhile, the task-oriented dialogue also includes 30 sub categories. Actually, the task 1 is a 31 categories classification task. In task 1, besides the data we released for training and developing, we also allow the participants to extend the training and developing corpus. Hence, there are two sub tasks for the task 1. One is closed test, which means the participants can only use the released data for training and developing. The other is open test, which allows the participants to explore external corpus for training and developing. Note that there is a same test set for both the closed test and the open test. For task 2, we release 11 examples of the complete user intent and 3 data file, which includes about one month of flight, hotel and train information, for participants to build their dialogue systems. The current date for online test is set to April 18, 2017. If the tester says “today”, the systems developed by the participants should understand that he/she indicates the date of April 18, 2017. Evaluation Results There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the version of full paper. Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2. Conclusion In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. In detail, we first present the two tasks of the evaluation as well as the evaluation metrics. We then describe the released data for evaluation. Finally, we also show the evaluation results of the two tasks. As the evaluation data is provided by the iFLYTEK Corporation from their real online applications, we believe that the released data will further promote the research of human-computer dialogue and fill the blank of the data on the two tasks. Acknowledgements We would like to thank the Social Media Processing (SMP) committee of Chinese Information Processing Society of China. We thank all the participants of the first evaluation of Chinese human-computer dialogue technology. We also thank the testers from the voice resource department of the iFLYTEK Corporation for their effort to the online real-time human-computer dialogue test and offline dialogue evaluation. 
We thank Lingzhi Li, Yangzi Zhang, Jiaqi Zhu and Xiaoming Shi from the Research Center for Social Computing and Information Retrieval for their support in annotating the data, establishing the system testing environment, communicating with the participants, and helping connect their systems to the testing environment.
For task 1, the best F1-score was 0.9391 on the closed test and 0.9414 on the open test. For task 2, the best-performing system scored: Ratio 0.3175, Satisfaction 64.53, Fluency 0, Turns -1, and Guide 2.
a9cc4b17063711c8606b8fc1c5eaf057b317a0c9
a9cc4b17063711c8606b8fc1c5eaf057b317a0c9_0
Q: What metrics are used in the evaluation? Text: Introduction Recently, human-computer dialogue has been emerged as a hot topic, which has attracted the attention of both academia and industry. In research, the natural language understanding (NLU), dialogue management (DM) and natural language generation (NLG) have been promoted by the technologies of big data and deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Following the development of machine reading comprehension BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , the NLU technology has made great progress. The development of DM technology is from rule-based approach and supervised learning based approach to reinforcement learning based approach BIBREF15 . The NLG technology is through pattern-based approach, sentence planning approach and end-to-end deep learning approach BIBREF16 , BIBREF17 , BIBREF18 . In application, there are massive products that are based on the technology of human-computer dialogue, such as Apple Siri, Amazon Echo, Microsoft Cortana, Facebook Messenger and Google Allo etc. Although the blooming of human-computer dialogue technology in both academia and industry, how to evaluate a dialogue system, especially an open domain chit-chat system, is still an open question. Figure FIGREF6 presents a brief comparison of the open domain chit-chat system and the task-oriented dialogue system. From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be infinite number of responses for an open domain messages. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue. To promote the development of the evaluation technology for dialogue systems, especially considering the language characteristics of Chinese, we organize the first evaluation of Chinese human-computer dialogue technology. In this paper, we will present the evaluation scheme and the released corpus in detail. The rest of this paper is as follows. In Section 2, we will briefly introduce the first evaluation of Chinese human-computer dialogue technology, which includes the descriptions and the evaluation metrics of the two tasks. We then present the evaluation data and final results in Section 3 and 4 respectively, following the conclusion and acknowledgements in the last two sections. The First Evaluation of Chinese Human-Computer Dialogue Technology The First Evaluation of Chinese Human-Computer Dialogue Technology includes two tasks, namely user intent classification and online testing of task-oriented dialogue. 
Task 1: User Intent Classification In using of human-computer dialogue based applications, human may have various intent, for example, chit-chatting, asking questions, booking air tickets, inquiring weather, etc. Therefore, after receiving an input message (text or ASR result) from a user, the first step is to classify the user intent into a specific domain for further processing. Table TABREF7 shows an example of user intent with category information. In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue. The task-oriented dialogue also includes 30 sub categories. In this evaluation, we only consider to classify the user intent in single utterance. It is worth noting that besides the released data for training and developing, we also allow to collect external data for training and developing. To considering that, the task 1 is indeed includes two sub tasks. One is a closed evaluation, in which only the released data can be used for training and developing. The other is an open evaluation that allow to collect external data for training and developing. For task 1, we use F1-score as evaluation metric. Task 2: Online Testing of Task-oriented Dialogue For the task-oriented dialogue systems, the best way for evaluation is to use the online human-computer dialogue. After finishing an online human-computer dialogue with a dialogue system, the human then manually evaluate the system by using the metrics of user satisfaction degree, dialogue fluency, etc. Therefore, in the task 2, we use an online testing of task-oriented dialogue for dialogue systems. For a human tester, we will give a complete intent with an initial sentence, which is used to start the online human-computer dialogue. Table TABREF12 shows an example of the task-oriented human-computer dialogue. Here “U” and “R” denote user and robot respectively. The complete intent is as following: “查询明天从哈尔滨到北京的晚间软卧火车票,上下铺均可。 Inquire the soft berth ticket at tomorrow evening, from Harbin to Beijing, either upper or lower berth is okay.” In task 2, there are three categories. They are “air tickets”, “train tickets” and “hotel”. Correspondingly, there are three type of tasks. All the tasks are in the scope of the three categories. However, a complete user intent may include more than one task. For example, a user may first inquiring the air tickets. However, due to the high price, the user decide to buy a train tickets. Furthermore, the user may also need to book a hotel room at the destination. We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begin to converse to each system. A dialogue is finished if the system successfully returns the information which the user inquires or the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of participants, we release an example set of complete user intent and three data files of flight, train and hotel in JSON format. There are five evaluation metrics for task 2 as following. Task completion ratio: The number of completed tasks divided by the number of total tasks. User satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively. Response fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency. Number of dialogue turns: The number of utterances in a task-completed dialogue. 
Guidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide. For the number of dialogue turns, we have a penalty rule that for a dialogue task, if the system cannot return the result (or accomplish the task) in 30 turns, the dialogue task is end by force. Meanwhile, if a system cannot accomplish a task in less than 30 dialogue turns, the number of dialogue turns is set to 30. Evaluation Data In the evaluation, all the data for training, developing and test is provided by the iFLYTEK Corporation. For task 1, as the descriptions in Section SECREF10 , the two top categories are chit-chat (chat in Table TABREF13 ) and task-oriented dialogue. Meanwhile, the task-oriented dialogue also includes 30 sub categories. Actually, the task 1 is a 31 categories classification task. In task 1, besides the data we released for training and developing, we also allow the participants to extend the training and developing corpus. Hence, there are two sub tasks for the task 1. One is closed test, which means the participants can only use the released data for training and developing. The other is open test, which allows the participants to explore external corpus for training and developing. Note that there is a same test set for both the closed test and the open test. For task 2, we release 11 examples of the complete user intent and 3 data file, which includes about one month of flight, hotel and train information, for participants to build their dialogue systems. The current date for online test is set to April 18, 2017. If the tester says “today”, the systems developed by the participants should understand that he/she indicates the date of April 18, 2017. Evaluation Results There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the version of full paper. Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2. Conclusion In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. In detail, we first present the two tasks of the evaluation as well as the evaluation metrics. We then describe the released data for evaluation. Finally, we also show the evaluation results of the two tasks. As the evaluation data is provided by the iFLYTEK Corporation from their real online applications, we believe that the released data will further promote the research of human-computer dialogue and fill the blank of the data on the two tasks. Acknowledgements We would like to thank the Social Media Processing (SMP) committee of Chinese Information Processing Society of China. We thank all the participants of the first evaluation of Chinese human-computer dialogue technology. We also thank the testers from the voice resource department of the iFLYTEK Corporation for their effort to the online real-time human-computer dialogue test and offline dialogue evaluation. 
We thank Lingzhi Li, Yangzi Zhang, Jiaqi Zhu and Xiaoming Shi from the Research Center for Social Computing and Information Retrieval for their support with the data annotation, for establishing the system testing environment, and for communicating with the participants and helping them connect their systems to the testing environment.
For task 1, F1-score; for task 2, task completion ratio, user satisfaction degree, response fluency, number of dialogue turns, and guidance ability for out-of-scope input.
6ead576ee5813164684a8cdda36e6a8c180455d9
6ead576ee5813164684a8cdda36e6a8c180455d9_0
Q: How do they measure the quality of summaries? Text: Introduction Question answering has been a long-standing research problem. Recently, reading comprehension (RC), a challenge to answer a question given textual evidence provided in a document set, has received much attention. Here, current mainstream studies have treated RC as a process of extracting an answer span from one passage BIBREF0 , BIBREF1 or multiple passages BIBREF2 , which is usually done by predicting the start and end positions of the answer BIBREF3 , BIBREF4 . The demand for answering questions in natural language is increasing rapidly, and this has led to the development of smart devices such as Siri and Alexa. However, in comparison with answer span extraction, the natural language generation (NLG) ability for RC has been less studied. While datasets such as MS MARCO BIBREF5 have been proposed for providing abstractive answers in natural language, the state-of-the-art methods BIBREF6 , BIBREF7 are based on answer span extraction, even for the datasets. Generative models such as S-Net BIBREF8 suffer from a dearth of training data to cover open-domain questions. Moreover, to satisfy various information needs, intelligent agents should be capable of answering one question in multiple styles, such as concise phrases that do not contain the context of the question and well-formed sentences that make sense even without the context of the question. These capabilities complement each other; however, the methods used in previous studies cannot utilize and control different answer styles within a model. In this study, we propose a generative model, called Masque, for multi-passage RC. On the MS MARCO 2.1 dataset, Masque achieves state-of-the-art performance on the dataset's two tasks, Q&A and NLG, with different answer styles. The main contributions of this study are that our model enables the following two abilities. Problem Formulation The task considered in this paper, is defined as: Problem 1 Given a question with $J$ words $x^q = \lbrace x^q_1, \ldots , x^q_J\rbrace $ , a set of $K$ passages, where each $k$ -th passage is composed of $L$ words $x^{p_k} = \lbrace x^{p_k}_1, \ldots , x^{p_k}_{L}\rbrace $ , and an answer style $s$ , an RC system outputs an answer $y = \lbrace y_1, \ldots , y_T \rbrace $ conditioned on the style. In short, for inference, given a set of 3-tuples $(x^q, \lbrace x^{p_k}\rbrace , s)$ , the system predicts $P(y)$ . The training data is a set of 6-tuples: $(x^q, \lbrace x^{p_k}\rbrace , s, y, a, \lbrace r^{p_k}\rbrace )$ , where $a$ is 1 if the question is answerable with the provided passages and 0 otherwise, and $r^{p_k}$ is 1 if the $k$ -th passage is required to formulate the answer and 0 otherwise. Proposed Model Our proposed model, Masque, is based on multi-source abstractive summarization; the answer our model generates can be viewed as a summary from the question and multiple passages. It is also style-controllable; one model can generate the answer with the target style. Masque directly models the conditional probability $p(y|x^q, \lbrace x^{p_k}\rbrace , s)$ . In addition to multi-style learning, it considers passage ranking and answer possibility classification together as multi-task learning in order to improve accuracy. Figure 2 shows the model architecture. It consists of the following modules. 1 The question-passages reader (§ "Question-Passages Reader" ) models interactions between the question and passages. 
2 The passage ranker (§ "Passage Ranker" ) finds relevant passages to the question. 3 The answer possibility classifier (§ "Answer Possibility Classifier" ) identifies answerable questions. 4 The answer sentence decoder (§ "Answer Sentence Decoder" ) outputs a sequence of words conditioned on the style. Question-Passages Reader Given a question and passages, the question-passages reader matches them so that the interactions among the question (passage) words conditioned on the passages (question) can be captured. Let $x^q$ and $x^{p_k}$ represent one-hot vectors of words in the question and $k$ -th passage. First, this layer projects each of the one-hot vectors (of size $V$ ) into a $d_\mathrm {word}$ -dimensional continuous vector space with a pre-trained weight matrix $W^e \in \mathbb {R}^{d_\mathrm {word} \times V}$ such as GloVe BIBREF15 . Next, it uses contextualized word representations, ELMo BIBREF16 , which is a character-level two-layer bidirectional language model pre-trained on a large-scale corpus. ELMo representations allow our model to use morphological clues to form robust representations for out-of-vocabulary words unseen in training. Then, the concatenation of the word and contextualized embedding vectors is passed to a two-layer highway network BIBREF17 that is shared for the question and passages. This layer uses a stack of Transformer blocks, which are shared for the question and passages, on top of the embeddings provided by the word embedding layer. The input of the first block is immediately mapped to a $d$ -dimensional vector by a linear transformation. The outputs of this layer are sequences of $d$ -dimensional vectors: $E^{p_k} \in \mathbb {R}^{d \times L}$ for the $k$ -th passage and $E^q \in \mathbb {R}^{d \times J}$ for the question. It consists of two sub-layers: a self-attention layer and a position-wise feed-forward network. For the self-attention layer, we adopt the multi-head attention mechanism defined in BIBREF12 . The feed-forward network consists of two linear transformations with a GELU BIBREF18 activation in between, following OpenAI GPT BIBREF19 . Each sub-layer is placed inside a residual block BIBREF20 . For an input $x$ and a given sub-layer function $f$ , the output is $\mathrm {LayerNorm}(f(x)+x)$ , where $\mathrm {LayerNorm}$ indicates the layer normalization proposed in BIBREF21 . To facilitate these residual connections, all sub-layers produce outputs of dimension $d$ . Note that our model does not use any position embeddings because ELMo gives the positional information of the words in each sequence. This layer fuses information from the passages to the question as well as from the question to the passages in a dual mechanism. It first computes a similarity matrix $U^{p_k} \in \mathbb {R}^{L{\times }J}$ between the question and $k$ -th passage, as is done in BIBREF22 , where $$U^{p_k}_{lj} = {w^a}^\top [ E^{p_k}_l; E^q_j; E^{p_k}_l \odot E^q_j ]$$ (Eq. 15) indicates the similarity between the $l$ -th word of the $k$ -th passage and the $j$ -th question word. $w^a \in \mathbb {R}^{3d}$ are learnable parameters. The $\odot $ operator denotes the Hadamard product, and the $[;]$ operator means vector concatenation across the rows. Next, it obtains the row and column normalized similarity matrices $A^{p_k} = \mathrm {softmax}_j({U^{p_k}}^\top ) \in \mathbb {R}^{J\times L}$ and $B^{p_k} = \mathrm {softmax}_{l}(U^{p_k}) \in \mathbb {R}^{L \times J}$ . 
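As a rough sketch of the dual-attention inputs just defined (the similarity matrix of Eq. 15 and its two normalizations), the computation for a single passage, with batching omitted and random values only for illustration, could look as follows; shapes mirror the text.

```python
# Sketch of the similarity matrix U (Eq. 15) and its row/column softmax normalizations A and B.
# E_p is d x L (one passage), E_q is d x J, and w_a has size 3d, as in the text.
import torch

def similarity_and_attention(E_p, E_q, w_a):
    d, L = E_p.shape
    _, J = E_q.shape
    # Pairwise features [E_p_l ; E_q_j ; E_p_l * E_q_j] for every (l, j) pair.
    P = E_p.T.unsqueeze(1).expand(L, J, d)          # (L, J, d)
    Q = E_q.T.unsqueeze(0).expand(L, J, d)          # (L, J, d)
    feats = torch.cat([P, Q, P * Q], dim=-1)        # (L, J, 3d)
    U = feats @ w_a                                  # (L, J) similarity matrix
    A = torch.softmax(U, dim=1).T                    # (J, L): per passage word, weights over question words
    B = torch.softmax(U, dim=0)                      # (L, J): per question word, weights over passage words
    return U, A, B

d, L, J = 4, 6, 3
U, A, B = similarity_and_attention(torch.randn(d, L), torch.randn(d, J), torch.randn(3 * d))
print(U.shape, A.shape, B.shape)   # torch.Size([6, 3]) torch.Size([3, 6]) torch.Size([6, 3])
```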
We use DCN BIBREF23 as the dual attention mechanism to obtain question-to-passage representations $G^{q \rightarrow p_k} \in \mathbb {R}^{5d \times L}$ : $$\nonumber [E^{p_k}; \bar{A}^{p_k}; \bar{\bar{A}}^{p_k}; E^{p_k} \odot \bar{A}^{p_k}; E^{p_k} \odot \bar{\bar{A}}^{p_k}]$$ (Eq. 16) and passage-to-question ones $G^{p \rightarrow q} \in \mathbb {R}^{5d \times J}$ : $$\begin{split} \nonumber & [ E^{q} ; \max _k(\bar{B}^{p_k}); \max _k(\bar{\bar{B}}^{p_k}); \\ &\hspace{10.0pt} E^{q} \odot \max _k(\bar{B}^{p_k}); E^{q} \odot \max _k(\bar{\bar{B}}^{p_k}) ] \mathrm {\ \ where} \end{split}\\ \nonumber &\bar{A}^{p_k} = E^q A^{p_k}\in \mathbb {R}^{d \times L}, \ \bar{B}^{p_k} = E^{p_k} B^{p_k} \in \mathbb {R}^{d \times J} \\ \nonumber &\bar{\bar{A}}^{p_k} = \bar{B}^{p_k} A^{p_k} \in \mathbb {R}^{d \times L}, \ \bar{\bar{B}}^{p_k} = \bar{A}^{p_k} B^{p_k} \in \mathbb {R}^{d \times J}.$$ (Eq. 17) This layer uses a stack of Transformer encoder blocks for question representations and obtains $M^q \in \mathbb {R}^{d \times J}$ from $G^{p \rightarrow q}$ . It also uses an another stack for passage representations and obtains $M^{p_k} \in \mathbb {R}^{d \times L}$ from $G^{q \rightarrow p_k}$ for each $k$ -th passage. The outputs of this layer, $M^q$ and $\lbrace M^{p_k}\rbrace $ , are passed on to the answer sentence decoder; the $\lbrace M^{p_k}\rbrace $ are also passed on to the passage ranker and answer possibility classifier. Passage Ranker The passage ranker maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the relevance score of each passage. To obtain a fixed-dimensional pooled representation of each passage sequence, this layer takes the output for the first passage word, $M^{p_k}_1$ , which corresponds to the beginning-of-sentence token. It calculates the relevance of each $k$ -th passage to the question as: $$\beta ^{p_k} = \mathrm {sigmoid}({w^r}^\top M^{p_k}_1),$$ (Eq. 20) where $w^r \in \mathbb {R}^{d}$ are learnable parameters. Answer Possibility Classifier The answer possibility classifier maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the probability of the answer possibility. The classifier takes the output for the first word, $M^{p_k}_1$ , for all passages and concatenates them to obtain a fixed-dimensional representation. It calculates the answer possibility to the question as: $$P(a) = \mathrm {sigmoid}({w^c}^\top [M^{p_1}_1; \ldots ; M^{p_K}_1]),$$ (Eq. 22) where $w^c \in \mathbb {R}^{Kd}$ are learnable parameters. Answer Sentence Decoder Given the outputs provided by the reader, the decoder generates a sequence of answer words one element at a time. It is auto-regressive BIBREF24 , consuming the previously generated words as additional input at each decoding step. Let $y = \lbrace y_1, \ldots , y_{T}\rbrace $ represent one-hot vectors of words in the answer. This layer has the same components as the word embedding layer of the question-passages reader, except that it uses a unidirectional ELMo in order to ensure that the predictions for position $t$ depend only on the known outputs at positions less than $t$ . Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. 
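The style conditioning described above is purely a data-side trick rather than an architectural change. A minimal sketch, with hypothetical token strings and style names, looks like this:

```python
# Sketch of the style-conditioning trick: an artificial style token is prepended to the
# target sequence, so a single decoder serves both answer styles.
# The token strings and style names are assumptions; only the mechanism follows the text.
STYLE_TOKENS = {"qa": "<qa_style>", "nlg": "<nlg_style>"}

def prepend_style(answer_words, style):
    """At training time: y_1 becomes the style token, followed by the reference answer."""
    return [STYLE_TOKENS[style]] + answer_words

def start_decoding(style):
    """At test time: the user picks the first token to control the generated style."""
    return [STYLE_TOKENS[style]]   # the decoder then generates y_2, y_3, ... autoregressively

print(prepend_style("there are 16 tablespoons in a cup .".split(), "nlg")[:3])
print(start_decoding("qa"))
```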
Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style. This layer uses a stack of Transformer decoder blocks on top of the embeddings provided by the word embedding layer. The input is immediately mapped to a $d$ -dimensional vector by a linear transformation, and the output of this layer is a sequence of $d$ -dimensional vectors: $\lbrace s_1, \ldots , s_T\rbrace $ . In addition to the encoder block, this block consists of second and third sub-layers after the self-attention block and before the feed-forward network, as shown in Figure 2 . As in BIBREF12 , the self-attention sub-layer uses a sub-sequent mask to prevent positions from attending to subsequent positions. The second and third sub-layers perform the multi-head attention over $M^q$ and $M^{p_\mathrm {all}}$ , respectively. The $M^{p_\mathrm {all}}$ is the concatenated outputs of the encoder stack for the passages, $$M^{p_\mathrm {all}} = [M^{p_1}, \ldots , M^{p_K}] \in \mathbb {R}^{d \times KL}.$$ (Eq. 27) The $[,]$ operator means vector concatenation across the columns. This attention for the concatenated passages enables our model to produce attention weights that are comparable between passages. Our extended mechanism allows both words to be generated from a fixed vocabulary and words to be copied from both the question and multiple passages. Figure 3 shows the overview. Let the extended vocabulary, $V_\mathrm {ext}$ , be the union of the common words (a small subset of the full vocabulary, $V$ , defined by the reader-side word embedding matrix) and all words appearing in the input question and passages. $P^v$ denotes the probability distribution of the $t$ -th answer word, $y_t$ , over the extended vocabulary. It is defined as: $$P^v(y_t) =\mathrm {softmax}({W^2}^\top (W^1 s_t + b^1)),$$ (Eq. 31) where the output embedding $W^2 \in \mathbb {R}^{d_\mathrm {word} \times V_\mathrm {ext}}$ is tied with the corresponding part of the input embedding BIBREF25 , and $W^1 \in \mathbb {R}^{d_\mathrm {word} \times d}$ and $b^1 \in \mathbb {R}^{d_\mathrm {word}}$ are learnable parameters. $P^v(y_t)$ is zero if $y_t$ is an out-of-vocabulary word for $V$ . The copy mechanism used in the original pointer-generator is based on the attention weights of a single-layer attentional RNN decoder BIBREF9 . The attention weights in our decoder stack are the intermediate outputs in multi-head attentions and are not suitable for the copy mechanism. Therefore, our model also uses additive attentions for the question and multiple passages on top of the decoder stack. The layer takes $s_t$ as the query and outputs $\alpha ^q_t \in \mathbb {R}^J$ ( $\alpha ^p_t \in \mathbb {R}^{KL}$ ) as the attention weights and $c^q_t \in \mathbb {R}^d$ ( $c^p_t \in \mathbb {R}^d$ ) as the context vectors for the question (passages): $$e^q_j &= {w^q}^\top \tanh (W^{qm} M_j^q + W^{qs} s_t +b^q), \\ \alpha ^q_t &= \mathrm {softmax}(e^q), \\ c^q_t &= \textstyle \sum _j \alpha ^q_{tj} M_j^q, \\ e^{p_k}_l &= {w^p}^\top \tanh (W^{pm} M_l^{p_k} + W^{ps} s_t +b^p), \\ \alpha ^p_t &= \mathrm {softmax}([e^{p_1}; \ldots ; e^{p_K}]), \\ c^p_t &= \textstyle \sum _{l} \alpha ^p_{tl} M^{p_\mathrm {all}}_{l},$$ (Eq. 33) where $w^q$ , $w^p \in \mathbb {R}^d$ , $W^{qm}$ , $W^{qs}$ , $W^{pm}$ , $W^{ps} \in \mathbb {R}^{d \times d}$ , and $b^q$ , $b^p \in \mathbb {R}^d$ are learnable parameters. 
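As a sketch of the additive copy attention in Eq. (33), restricted to the question side and with batching omitted, the computation is:

```python
# Sketch of the additive (copy) attention over the question from Eq. (33).
# M_q: (d, J) question memory, s_t: (d,) decoder state; the parameters are passed in explicitly.
import torch

def additive_attention(M_q, s_t, W_qm, W_qs, b_q, w_q):
    scores = w_q @ torch.tanh(W_qm @ M_q + (W_qs @ s_t + b_q).unsqueeze(1))  # (J,) scores e^q
    alpha = torch.softmax(scores, dim=0)                                      # attention weights alpha^q_t
    context = M_q @ alpha                                                     # (d,) context vector c^q_t
    return alpha, context

d, J = 8, 5
W_qm, W_qs = torch.randn(d, d), torch.randn(d, d)
b_q, w_q = torch.randn(d), torch.randn(d)
alpha, c = additive_attention(torch.randn(d, J), torch.randn(d), W_qm, W_qs, b_q, w_q)
print(alpha.sum().item(), c.shape)   # ~1.0, torch.Size([8])
```

The passage-side weights $\alpha ^p_t$ and context $c^p_t$ have the same form, applied to the concatenated passage memory $M^{p_\mathrm {all}}$.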
$P^q$ and $P^p$ are the copy distributions over the extended vocabulary, defined as: $$P^q(y_t) &= \textstyle \sum _{j: x^q_j = y_t} \alpha ^q_{tj}, \\ P^p(y_t) &= \textstyle \sum _{l: x^{p_{k(l)}}_{l} = y_t} \alpha ^p_{tl},$$ (Eq. 34) where $k(l)$ means the passage index corresponding to the $l$ -th word in the concatenated passages. The final distribution of the $t$ -th answer word, $y_t$ , is defined as a mixture of the three distributions: $$P(y_t) = \lambda ^v P^v(y_t) + \lambda ^q P^q(y_t) + \lambda ^p P^p(y_t),$$ (Eq. 36) where the mixture weights are given by $$\lambda ^v, \lambda ^q, \lambda ^p = \mathrm {softmax}(W^m [s_t; c^q_t; c^p_t] + b^m).$$ (Eq. 37) $W^m \in \mathbb {R}^{3 \times 3d}$ , $b^m \in \mathbb {R}^3$ are learnable parameters. In order not to use words in irrelevant passages, our model introduces the concept of combined attention BIBREF26 . While the original technique combines the word and sentence level attentions, our model combines the passage-level relevance $\beta ^{p_k}$ and word-level attentions $\alpha ^p_t$ by using simple scalar multiplication and re-normalization. The updated word attention is: $$\alpha ^p_{tl} & := \frac{\alpha ^p_{tl} \beta ^{p_{k(l)} }}{\sum _{l^{\prime }} \alpha ^p_{tl^{\prime }} \beta ^{p_{k(l^{\prime })}}}.$$ (Eq. 39) Loss Function We define the training loss as the sum of losses in $$L(\theta ) = L_\mathrm {dec} + \gamma _\mathrm {rank} L_\mathrm {rank} + \gamma _\mathrm {cls} L_\mathrm {cls}$$ (Eq. 41) where $\theta $ is the set of all learnable parameters, and $\gamma _\mathrm {rank}$ and $\gamma _\mathrm {cls}$ are balancing parameters. The loss of the decoder, $L_\mathrm {dec}$ , is the negative log likelihood of the whole target answer sentence averaged over $N_\mathrm {able}$ answerable examples: $$L_\mathrm {dec} = - \frac{1}{N_\mathrm {able}}\sum _{(a,y)\in \mathcal {D}} \frac{a}{T} \sum _t \log P(y_{t}),$$ (Eq. 42) where $\mathcal {D}$ is the training dataset. The losses of the passage ranker, $L_\mathrm {rank}$ , and the answer possibility classifier, $L_\mathrm {cls}$ , are the binary cross entropy between the true and predicted values averaged over all $N$ examples: $$L_\mathrm {rank} = - \frac{1}{NK} \sum _k \sum _{r^{p_k}\in \mathcal {D}} \biggl ( \begin{split} &r^{p_k} \log \beta ^{p_k} + \\ &(1-r^{p_k}) \log (1-\beta ^{p_k}) \end{split} \biggr ),\\ L_\mathrm {cls} = - \frac{1}{N} \sum _{a \in \mathcal {D}} \biggl ( \begin{split} &a \log P(a) + \\ &(1-a) \log (1-P(a)) \end{split} \biggr ).$$ (Eq. 43) Setup We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\subset $ ANS $\subset $ ALL. We trained our model on a machine with eight NVIDIA P100 GPUs. 
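A small sketch of how the three evaluation subsets nest (the record fields are assumptions; the filtering criteria follow the definitions above, with WFA a subset of ANS, which is a subset of ALL):

```python
# Sketch of building the ALL / ANS / WFA subsets used in the experiments.
def build_subsets(examples):
    ALL = list(examples)
    ANS = [ex for ex in ALL if ex["answerable"]]               # answerable questions only
    WFA = [ex for ex in ANS if ex["has_well_formed_answer"]]   # ... that also have well-formed answers
    assert len(WFA) <= len(ANS) <= len(ALL)
    return ALL, ANS, WFA

data = [{"answerable": True,  "has_well_formed_answer": True},
        {"answerable": True,  "has_well_formed_answer": False},
        {"answerable": False, "has_well_formed_answer": False}]
ALL, ANS, WFA = build_subsets(data)
print(len(ALL), len(ANS), len(WFA))   # 3 2 1
```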
Our model was jointly trained with the two answer styles in the ALL set for a total of eight epochs with a batch size of 80. The training took roughly six days. The ensemble model consists of six training runs with identical architectures and hyperparameters. The hidden size $d$ was 304, and the number of attention heads was 8. The inner state size of the feed-forward networks was 256. The numbers of shared encoding blocks, modeling blocks for the question, modeling blocks for the passages, and decoder blocks were 3, 2, 5, and 8, respectively. We used the pre-trained uncased 300-dimensional GloVe BIBREF15 and the original 512-dimensional ELMo BIBREF16 . We used the spaCy tokenizer, and all words were lowercased except the input for ELMo. The number of common words in $V_\mathrm {ext}$ was 5,000. We used the Adam optimizer BIBREF27 with $\beta _1 = 0.9$ , $\beta _2 = 0.999$ , and $\epsilon = 10^{-8}$ . Weights were initialized using $N(0, 0.02)$ , except that the biases of all the linear transformations were initialized with zero vectors. The learning rate was increased linearly from zero to $2.5 \times 10^{-4}$ in the first 2,000 steps and annealed to 0 using a cosine schedule. All parameter gradients were clipped to a maximum norm of 1. An exponential moving average was applied to all trainable variables with a decay rate of 0.9995. The balancing factors of joint learning, $\gamma _\mathrm {rank}$ and $\gamma _\mathrm {cls}$ , were set to 0.5 and 0.1. We used a modified version of the L$_2$ regularization proposed in BIBREF28 , with $w = 0.01$ . We additionally used a dropout BIBREF29 rate of 0.3 for all highway networks and for the residual and scaled dot-product attention operations in the multi-head attention mechanism. We also used one-sided label smoothing BIBREF30 for the passage relevance and answer possibility labels; we smoothed only the positive labels to 0.9. Results Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in terms of Rouge-L. In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1. The capability of creating abstractive summaries from the question and passages contributed to its improvements over the state-of-the-art extractive approaches BIBREF6 , BIBREF7 . Table 3 shows the results of the ablation test for our model (controlled with the NLG style) on the well-formed answers of the WFA dev. set. Our model, which was trained with the ALL set consisting of the two styles, outperformed the model trained with the WFA set consisting of the single style. Multi-style learning allowed our model to improve NLG performance by also using non-sentence answers. Table 3 also shows that our model outperformed the model that used RNNs and self-attentions instead of Transformer blocks, as in MCAN BIBREF11 . Our deep Transformer decoder captured the interaction among the question, the passages, and the answer better than a single-layer LSTM decoder. Finally, Table 3 shows that our model (jointly trained with the passage ranker and answer possibility classifier) outperformed the model that did not use the ranker and classifier. The joint learning has a regularization effect on the question-passages reader. We also confirmed that the gold passage ranker, which can predict passage relevances perfectly, improves RC performance significantly. Passage re-ranking will be a key to developing a system that can outperform humans.
Table 4 shows the passage re-ranking performance for the ten given passages on the ANS dev. set. Our ranker improved the initial ranking provided by Bing by a significant margin. Also, the ranker shares the question-passages reader with the answer decoder, and this sharing contributed to the improvements over the ranker trained without the answer decoder. This result is similar to those reported in BIBREF33 . Moreover, the joint learning with the answer possibility classifier and multiple answer styles, which enables our model to learn from a larger number of data, improved the re-ranking. Figure 4 shows the precision-recall curve of answer possibility classification on the ALL dev. set, where the positive class is the answerable data. Our model identified the answerable questions well. The maximum $F_1$ score was 0.7893. This is the first report on answer possibility classification with MS MARCO 2.1. Figure 5 shows the lengths of the answers generated by our model, which are broken down by answer style and query type. The generated answers were relatively shorter than the reference answers but well controlled with the target style in every query type. Also, we should note that our model does not guarantee the consistency in terms of meaning across the answer styles. We randomly selected 100 questions and compared the answers our model generated with the NLG and Q&A styles. The consistency ratio was 0.81, where major errors were due to copying words from different parts of the passages and generating different words, especially yes/no, from a fixed vocabulary. Appendix "Reading Comprehension Examples generated by Masque from MS MARCO 2.1" shows examples of generated answers. We found (d) style errors; (e) yes/no classification errors; (f) copy errors with respect to numerical values; and (c,e) grammatical errors that were originally contained in the inputs. Conclusion We believe our study makes two contributions to the study of multi-passage RC with NLG. Our model enables 1) multi-source abstractive summarization based RC and 2) style-controllable RC. The key strength of our model is its high accuracy of generating abstractive summaries from the question and passages; our model achieved state-of-the-art performance in terms of Rouge-L on the Q&A and NLG tasks of MS MARCO 2.1 that have different answer styles BIBREF5 . The styles considered in this paper are only related to the context of the question in the answer sentence; our model will be promising for controlling other styles such as length and speaking styles. Future work will involve exploring the potential of hybrid models combining extractive and abstractive approaches and improving the passage re-ranking and answerable question identification.
Rouge-L, Bleu-1
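Rouge-L is the LCS-based F-measure; the following is a rough, unofficial sketch of a sentence-level version (single reference, simplified beta weighting), not the official MS MARCO evaluation script.

```python
# Rough sketch of sentence-level ROUGE-L: an F-measure over the longest common subsequence.
# The beta weighting is simplified; official evaluations use the MS MARCO scripts.
def lcs_length(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(hypothesis, reference, beta=1.0):
    hyp, ref = hypothesis.split(), reference.split()
    lcs = lcs_length(hyp, ref)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(hyp), lcs / len(ref)
    return (1 + beta ** 2) * p * r / (r + beta ** 2 * p)

print(rouge_l("there are 16 tablespoons in a cup", "there are 16 tablespoons in one cup"))
```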
0117aa1266a37b0d2ef429f1b0653b9dde3677fe
0117aa1266a37b0d2ef429f1b0653b9dde3677fe_0
Q: Does their model also take the expected answer style as input?
Yes
5455b3cdcf426f4d5fc40bc11644a432fa7a5c8f
5455b3cdcf426f4d5fc40bc11644a432fa7a5c8f_0
Q: What do they mean by answer styles?
3 The answer possibility classifier (§ "Answer Possibility Classifier" ) identifies answerable questions. 4 The answer sentence decoder (§ "Answer Sentence Decoder" ) outputs a sequence of words conditioned on the style. Question-Passages Reader Given a question and passages, the question-passages reader matches them so that the interactions among the question (passage) words conditioned on the passages (question) can be captured. Let $x^q$ and $x^{p_k}$ represent one-hot vectors of words in the question and $k$ -th passage. First, this layer projects each of the one-hot vectors (of size $V$ ) into a $d_\mathrm {word}$ -dimensional continuous vector space with a pre-trained weight matrix $W^e \in \mathbb {R}^{d_\mathrm {word} \times V}$ such as GloVe BIBREF15 . Next, it uses contextualized word representations, ELMo BIBREF16 , which is a character-level two-layer bidirectional language model pre-trained on a large-scale corpus. ELMo representations allow our model to use morphological clues to form robust representations for out-of-vocabulary words unseen in training. Then, the concatenation of the word and contextualized embedding vectors is passed to a two-layer highway network BIBREF17 that is shared for the question and passages. This layer uses a stack of Transformer blocks, which are shared for the question and passages, on top of the embeddings provided by the word embedding layer. The input of the first block is immediately mapped to a $d$ -dimensional vector by a linear transformation. The outputs of this layer are sequences of $d$ -dimensional vectors: $E^{p_k} \in \mathbb {R}^{d \times L}$ for the $k$ -th passage and $E^q \in \mathbb {R}^{d \times J}$ for the question. It consists of two sub-layers: a self-attention layer and a position-wise feed-forward network. For the self-attention layer, we adopt the multi-head attention mechanism defined in BIBREF12 . The feed-forward network consists of two linear transformations with a GELU BIBREF18 activation in between, following OpenAI GPT BIBREF19 . Each sub-layer is placed inside a residual block BIBREF20 . For an input $x$ and a given sub-layer function $f$ , the output is $\mathrm {LayerNorm}(f(x)+x)$ , where $\mathrm {LayerNorm}$ indicates the layer normalization proposed in BIBREF21 . To facilitate these residual connections, all sub-layers produce outputs of dimension $d$ . Note that our model does not use any position embeddings because ELMo gives the positional information of the words in each sequence. This layer fuses information from the passages to the question as well as from the question to the passages in a dual mechanism. It first computes a similarity matrix $U^{p_k} \in \mathbb {R}^{L{\times }J}$ between the question and $k$ -th passage, as is done in BIBREF22 , where $$U^{p_k}_{lj} = {w^a}^\top [ E^{p_k}_l; E^q_j; E^{p_k}_l \odot E^q_j ]$$ (Eq. 15) indicates the similarity between the $l$ -th word of the $k$ -th passage and the $j$ -th question word. $w^a \in \mathbb {R}^{3d}$ are learnable parameters. The $\odot $ operator denotes the Hadamard product, and the $[;]$ operator means vector concatenation across the rows. Next, it obtains the row and column normalized similarity matrices $A^{p_k} = \mathrm {softmax}_j({U^{p_k}}^\top ) \in \mathbb {R}^{J\times L}$ and $B^{p_k} = \mathrm {softmax}_{l}(U^{p_k}) \in \mathbb {R}^{L \times J}$ . 
We use DCN BIBREF23 as the dual attention mechanism to obtain question-to-passage representations $G^{q \rightarrow p_k} \in \mathbb {R}^{5d \times L}$ : $$\nonumber [E^{p_k}; \bar{A}^{p_k}; \bar{\bar{A}}^{p_k}; E^{p_k} \odot \bar{A}^{p_k}; E^{p_k} \odot \bar{\bar{A}}^{p_k}]$$ (Eq. 16) and passage-to-question ones $G^{p \rightarrow q} \in \mathbb {R}^{5d \times J}$ : $$\begin{split} \nonumber & [ E^{q} ; \max _k(\bar{B}^{p_k}); \max _k(\bar{\bar{B}}^{p_k}); \\ &\hspace{10.0pt} E^{q} \odot \max _k(\bar{B}^{p_k}); E^{q} \odot \max _k(\bar{\bar{B}}^{p_k}) ] \mathrm {\ \ where} \end{split}\\ \nonumber &\bar{A}^{p_k} = E^q A^{p_k}\in \mathbb {R}^{d \times L}, \ \bar{B}^{p_k} = E^{p_k} B^{p_k} \in \mathbb {R}^{d \times J} \\ \nonumber &\bar{\bar{A}}^{p_k} = \bar{B}^{p_k} A^{p_k} \in \mathbb {R}^{d \times L}, \ \bar{\bar{B}}^{p_k} = \bar{A}^{p_k} B^{p_k} \in \mathbb {R}^{d \times J}.$$ (Eq. 17) This layer uses a stack of Transformer encoder blocks for question representations and obtains $M^q \in \mathbb {R}^{d \times J}$ from $G^{p \rightarrow q}$ . It also uses an another stack for passage representations and obtains $M^{p_k} \in \mathbb {R}^{d \times L}$ from $G^{q \rightarrow p_k}$ for each $k$ -th passage. The outputs of this layer, $M^q$ and $\lbrace M^{p_k}\rbrace $ , are passed on to the answer sentence decoder; the $\lbrace M^{p_k}\rbrace $ are also passed on to the passage ranker and answer possibility classifier. Passage Ranker The passage ranker maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the relevance score of each passage. To obtain a fixed-dimensional pooled representation of each passage sequence, this layer takes the output for the first passage word, $M^{p_k}_1$ , which corresponds to the beginning-of-sentence token. It calculates the relevance of each $k$ -th passage to the question as: $$\beta ^{p_k} = \mathrm {sigmoid}({w^r}^\top M^{p_k}_1),$$ (Eq. 20) where $w^r \in \mathbb {R}^{d}$ are learnable parameters. Answer Possibility Classifier The answer possibility classifier maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the probability of the answer possibility. The classifier takes the output for the first word, $M^{p_k}_1$ , for all passages and concatenates them to obtain a fixed-dimensional representation. It calculates the answer possibility to the question as: $$P(a) = \mathrm {sigmoid}({w^c}^\top [M^{p_1}_1; \ldots ; M^{p_K}_1]),$$ (Eq. 22) where $w^c \in \mathbb {R}^{Kd}$ are learnable parameters. Answer Sentence Decoder Given the outputs provided by the reader, the decoder generates a sequence of answer words one element at a time. It is auto-regressive BIBREF24 , consuming the previously generated words as additional input at each decoding step. Let $y = \lbrace y_1, \ldots , y_{T}\rbrace $ represent one-hot vectors of words in the answer. This layer has the same components as the word embedding layer of the question-passages reader, except that it uses a unidirectional ELMo in order to ensure that the predictions for position $t$ depend only on the known outputs at positions less than $t$ . Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. 
Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style. This layer uses a stack of Transformer decoder blocks on top of the embeddings provided by the word embedding layer. The input is immediately mapped to a $d$ -dimensional vector by a linear transformation, and the output of this layer is a sequence of $d$ -dimensional vectors: $\lbrace s_1, \ldots , s_T\rbrace $ . In addition to the encoder block, this block consists of second and third sub-layers after the self-attention block and before the feed-forward network, as shown in Figure 2 . As in BIBREF12 , the self-attention sub-layer uses a sub-sequent mask to prevent positions from attending to subsequent positions. The second and third sub-layers perform the multi-head attention over $M^q$ and $M^{p_\mathrm {all}}$ , respectively. The $M^{p_\mathrm {all}}$ is the concatenated outputs of the encoder stack for the passages, $$M^{p_\mathrm {all}} = [M^{p_1}, \ldots , M^{p_K}] \in \mathbb {R}^{d \times KL}.$$ (Eq. 27) The $[,]$ operator means vector concatenation across the columns. This attention for the concatenated passages enables our model to produce attention weights that are comparable between passages. Our extended mechanism allows both words to be generated from a fixed vocabulary and words to be copied from both the question and multiple passages. Figure 3 shows the overview. Let the extended vocabulary, $V_\mathrm {ext}$ , be the union of the common words (a small subset of the full vocabulary, $V$ , defined by the reader-side word embedding matrix) and all words appearing in the input question and passages. $P^v$ denotes the probability distribution of the $t$ -th answer word, $y_t$ , over the extended vocabulary. It is defined as: $$P^v(y_t) =\mathrm {softmax}({W^2}^\top (W^1 s_t + b^1)),$$ (Eq. 31) where the output embedding $W^2 \in \mathbb {R}^{d_\mathrm {word} \times V_\mathrm {ext}}$ is tied with the corresponding part of the input embedding BIBREF25 , and $W^1 \in \mathbb {R}^{d_\mathrm {word} \times d}$ and $b^1 \in \mathbb {R}^{d_\mathrm {word}}$ are learnable parameters. $P^v(y_t)$ is zero if $y_t$ is an out-of-vocabulary word for $V$ . The copy mechanism used in the original pointer-generator is based on the attention weights of a single-layer attentional RNN decoder BIBREF9 . The attention weights in our decoder stack are the intermediate outputs in multi-head attentions and are not suitable for the copy mechanism. Therefore, our model also uses additive attentions for the question and multiple passages on top of the decoder stack. The layer takes $s_t$ as the query and outputs $\alpha ^q_t \in \mathbb {R}^J$ ( $\alpha ^p_t \in \mathbb {R}^{KL}$ ) as the attention weights and $c^q_t \in \mathbb {R}^d$ ( $c^p_t \in \mathbb {R}^d$ ) as the context vectors for the question (passages): $$e^q_j &= {w^q}^\top \tanh (W^{qm} M_j^q + W^{qs} s_t +b^q), \\ \alpha ^q_t &= \mathrm {softmax}(e^q), \\ c^q_t &= \textstyle \sum _j \alpha ^q_{tj} M_j^q, \\ e^{p_k}_l &= {w^p}^\top \tanh (W^{pm} M_l^{p_k} + W^{ps} s_t +b^p), \\ \alpha ^p_t &= \mathrm {softmax}([e^{p_1}; \ldots ; e^{p_K}]), \\ c^p_t &= \textstyle \sum _{l} \alpha ^p_{tl} M^{p_\mathrm {all}}_{l},$$ (Eq. 33) where $w^q$ , $w^p \in \mathbb {R}^d$ , $W^{qm}$ , $W^{qs}$ , $W^{pm}$ , $W^{ps} \in \mathbb {R}^{d \times d}$ , and $b^q$ , $b^p \in \mathbb {R}^d$ are learnable parameters. 
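A compact sketch of the additive attention in Eq. 33 that the copy mechanism relies on. It covers a single source (the question, or the concatenated passages), takes the weight shapes as stated in the text, and leaves the rest of the decoder plumbing out; the function name is ours.

```python
import torch

def additive_attention(M, s_t, W_m, W_s, w, b):
    """Eq. 33 for one source: attention weights alpha (N,) and context vector c (d,).

    M: (d, N) encoder outputs (question, or concatenated passages with N = K*L),
    s_t: (d,) decoder state at step t, W_m, W_s: (d, d), w, b: (d,).
    """
    e = w @ torch.tanh(W_m @ M + (W_s @ s_t + b).unsqueeze(1))  # (N,), bias broadcast over positions
    alpha = torch.softmax(e, dim=0)
    c = M @ alpha                                               # weighted sum of the columns of M
    return alpha, c
```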
$P^q$ and $P^p$ are the copy distributions over the extended vocabulary, defined as: $$P^q(y_t) &= \textstyle \sum _{j: x^q_j = y_t} \alpha ^q_{tj}, \\ P^p(y_t) &= \textstyle \sum _{l: x^{p_{k(l)}}_{l} = y_t} \alpha ^p_{tl},$$ (Eq. 34) where $k(l)$ means the passage index corresponding to the $l$ -th word in the concatenated passages. The final distribution of the $t$ -th answer word, $y_t$ , is defined as a mixture of the three distributions: $$P(y_t) = \lambda ^v P^v(y_t) + \lambda ^q P^q(y_t) + \lambda ^p P^p(y_t),$$ (Eq. 36) where the mixture weights are given by $$\lambda ^v, \lambda ^q, \lambda ^p = \mathrm {softmax}(W^m [s_t; c^q_t; c^p_t] + b^m).$$ (Eq. 37) $W^m \in \mathbb {R}^{3 \times 3d}$ , $b^m \in \mathbb {R}^3$ are learnable parameters. In order not to use words in irrelevant passages, our model introduces the concept of combined attention BIBREF26 . While the original technique combines the word and sentence level attentions, our model combines the passage-level relevance $\beta ^{p_k}$ and word-level attentions $\alpha ^p_t$ by using simple scalar multiplication and re-normalization. The updated word attention is: $$\alpha ^p_{tl} & := \frac{\alpha ^p_{tl} \beta ^{p_{k(l)} }}{\sum _{l^{\prime }} \alpha ^p_{tl^{\prime }} \beta ^{p_{k(l^{\prime })}}}.$$ (Eq. 39) Loss Function We define the training loss as the sum of losses in $$L(\theta ) = L_\mathrm {dec} + \gamma _\mathrm {rank} L_\mathrm {rank} + \gamma _\mathrm {cls} L_\mathrm {cls}$$ (Eq. 41) where $\theta $ is the set of all learnable parameters, and $\gamma _\mathrm {rank}$ and $\gamma _\mathrm {cls}$ are balancing parameters. The loss of the decoder, $L_\mathrm {dec}$ , is the negative log likelihood of the whole target answer sentence averaged over $N_\mathrm {able}$ answerable examples: $$L_\mathrm {dec} = - \frac{1}{N_\mathrm {able}}\sum _{(a,y)\in \mathcal {D}} \frac{a}{T} \sum _t \log P(y_{t}),$$ (Eq. 42) where $\mathcal {D}$ is the training dataset. The losses of the passage ranker, $L_\mathrm {rank}$ , and the answer possibility classifier, $L_\mathrm {cls}$ , are the binary cross entropy between the true and predicted values averaged over all $N$ examples: $$L_\mathrm {rank} = - \frac{1}{NK} \sum _k \sum _{r^{p_k}\in \mathcal {D}} \biggl ( \begin{split} &r^{p_k} \log \beta ^{p_k} + \\ &(1-r^{p_k}) \log (1-\beta ^{p_k}) \end{split} \biggr ),\\ L_\mathrm {cls} = - \frac{1}{N} \sum _{a \in \mathcal {D}} \biggl ( \begin{split} &a \log P(a) + \\ &(1-a) \log (1-P(a)) \end{split} \biggr ).$$ (Eq. 43) Setup We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\subset $ ANS $\subset $ ALL. We trained our model on a machine with eight NVIDIA P100 GPUs. 
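Since Eqs. 34-39 above pack several steps into a few lines, here is a hedged sketch of the relevance-weighted re-normalization of the passage attention and the pointer-generator mixture over the extended vocabulary. The id tensors (`q_ids`, `p_ids`) mapping each input position to its extended-vocabulary index are a convention we introduce for illustration.

```python
import torch

def combine_with_relevance(alpha_p, beta, L):
    """Eq. 39: re-weight word-level passage attention (K*L,) by passage relevances beta (K,)."""
    weighted = alpha_p * beta.repeat_interleave(L)   # passages are concatenated in order p_1 ... p_K
    return weighted / weighted.sum()

def final_distribution(P_v, alpha_q, alpha_p, q_ids, p_ids, s_t, c_q, c_p, W_m, b_m):
    """Eqs. 34-37: mixture of generative and copy distributions over the extended vocabulary.

    P_v: (V_ext,) generative distribution, alpha_q: (J,), alpha_p: (K*L,) attention weights,
    q_ids / p_ids: LongTensors giving the extended-vocabulary id of each input position,
    s_t, c_q, c_p: (d,), W_m: (3, 3d), b_m: (3,).
    """
    V_ext = P_v.numel()
    lam = torch.softmax(W_m @ torch.cat([s_t, c_q, c_p]) + b_m, dim=0)     # (3,) mixture weights
    P_q = torch.zeros(V_ext).index_add_(0, q_ids, alpha_q)                 # sum weights of positions sharing a word
    P_p = torch.zeros(V_ext).index_add_(0, p_ids, alpha_p)
    return lam[0] * P_v + lam[1] * P_q + lam[2] * P_p
```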
Our model was jointly trained with the two answer styles in the ALL set for a total of eight epochs with a batch size of 80. The training took roughly six days. The ensemble model consists of six training runs with the identical architecture and hyperparameters. The hidden size $d$ was 304, and the number of attention heads was 8. The inner state size of the feed-forward networks was 256. The numbers of shared encoding blocks, modeling blocks for question, modeling blocks for passages, and decoder blocks were 3, 2, 5, and 8, respectively. We used the pre-trained uncased 300-dimensional GloVe BIBREF15 and the original 512-dimensional ELMo BIBREF16 . We used the spaCy tokenizer, and all words were lowercased except the input for ELMo. The number of common words in $V_\mathrm {ext}$ was 5,000. We used the Adam optimization BIBREF27 with $\beta _1 = 0.9$ , $\beta _2 = 0.999$ , and $\epsilon = 10^{-8}$ . Weights were initialized using $N(0, 0.02)$ , except that the biases of all the linear transformations were initialized with zero vectors. The learning rate was increased linearly from zero to $2.5 \times 10^{-4}$ in the first 2,000 steps and annealed to 0 using a cosine schedule. All parameter gradients were clipped to a maximum norm of 1. An exponential moving average was applied to all trainable variables with a decay rate 0.9995. The balancing factors of joint learning, $\lambda _\mathrm {rank}$ and $\lambda _\mathrm {cls}$ , were set to 0.5 and 0.1. We used a modified version of the L $_2$ regularization proposed in BIBREF28 , with $w = 0.01$ . We additionally used a dropout BIBREF29 rate of 0.3 for all highway networks and residual and scaled dot-product attention operations in the multi-head attention mechanism. We also used one-sided label smoothing BIBREF30 for the passage relevance and answer possibility labels. We smoothed only the positive labels to 0.9. Results Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in terms of Rouge-L. In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1. The capability of creating abstractive summaries from the question and passages contributed to its improvements over the state-of-the-art extractive approaches BIBREF6 , BIBREF7 . Table 3 shows the results of the ablation test for our model (controlled with the NLG style) on the well-formed answers of the WFA dev. set. Our model, which was trained with the ALL set consisting of the two styles, outperformed the model trained with the WFA set consisting of the single style. Multi-style learning allowed our model to improve NLG performance by also using non-sentence answers. Table 3 shows that our model outperformed the model that used RNNs and self-attentions instead of Transformer blocks as in MCAN BIBREF11 . Our deep Transformer decoder captured the interaction among the question, the passages, and the answer better than a single-layer LSTM decoder. Table 3 shows that our model (jointly trained with the passage ranker and answer possibility classifier) outperformed the model that did not use the ranker and classifier. The joint learning has a regularization effect on the question-passages reader. We also confirmed that the gold passage ranker, which can predict passage relevances perfectly, improves RC performance significantly. Passage re-ranking will be a key to developing a system that can outperform humans. 
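The optimization schedule described in the setup above (linear warmup to 2.5e-4 over the first 2,000 steps, then cosine annealing to zero) can be written as a small step-wise function. The total number of training steps is not reported, so it is left as a parameter in this sketch.

```python
import math

def learning_rate(step, total_steps, warmup_steps=2000, peak_lr=2.5e-4):
    """Linear warmup followed by cosine annealing to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * min(progress, 1.0)))
```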
Table 4 shows the passage re-ranking performance for the ten given passages on the ANS dev. set. Our ranker improved the initial ranking provided by Bing by a significant margin. Also, the ranker shares the question-passages reader with the answer decoder, and this sharing contributed to the improvements over the ranker trained without the answer decoder. This result is similar to those reported in BIBREF33 . Moreover, the joint learning with the answer possibility classifier and multiple answer styles, which enables our model to learn from a larger number of data, improved the re-ranking. Figure 4 shows the precision-recall curve of answer possibility classification on the ALL dev. set, where the positive class is the answerable data. Our model identified the answerable questions well. The maximum $F_1$ score was 0.7893. This is the first report on answer possibility classification with MS MARCO 2.1. Figure 5 shows the lengths of the answers generated by our model, which are broken down by answer style and query type. The generated answers were relatively shorter than the reference answers but well controlled with the target style in every query type. Also, we should note that our model does not guarantee the consistency in terms of meaning across the answer styles. We randomly selected 100 questions and compared the answers our model generated with the NLG and Q&A styles. The consistency ratio was 0.81, where major errors were due to copying words from different parts of the passages and generating different words, especially yes/no, from a fixed vocabulary. Appendix "Reading Comprehension Examples generated by Masque from MS MARCO 2.1" shows examples of generated answers. We found (d) style errors; (e) yes/no classification errors; (f) copy errors with respect to numerical values; and (c,e) grammatical errors that were originally contained in the inputs. Conclusion We believe our study makes two contributions to the study of multi-passage RC with NLG. Our model enables 1) multi-source abstractive summarization based RC and 2) style-controllable RC. The key strength of our model is its high accuracy of generating abstractive summaries from the question and passages; our model achieved state-of-the-art performance in terms of Rouge-L on the Q&A and NLG tasks of MS MARCO 2.1 that have different answer styles BIBREF5 . The styles considered in this paper are only related to the context of the question in the answer sentence; our model will be promising for controlling other styles such as length and speaking styles. Future work will involve exploring the potential of hybrid models combining extractive and abstractive approaches and improving the passage re-ranking and answerable question identification.
well-formed sentences vs concise answers
6c80bc3ed6df228c8ca6e02c0a8a1c2889498688
6c80bc3ed6df228c8ca6e02c0a8a1c2889498688_0
Q: Is there exactly one "answer style" per dataset? Text: Introduction Question answering has been a long-standing research problem. Recently, reading comprehension (RC), a challenge to answer a question given textual evidence provided in a document set, has received much attention. Here, current mainstream studies have treated RC as a process of extracting an answer span from one passage BIBREF0 , BIBREF1 or multiple passages BIBREF2 , which is usually done by predicting the start and end positions of the answer BIBREF3 , BIBREF4 . The demand for answering questions in natural language is increasing rapidly, and this has led to the development of smart devices such as Siri and Alexa. However, in comparison with answer span extraction, the natural language generation (NLG) ability for RC has been less studied. While datasets such as MS MARCO BIBREF5 have been proposed for providing abstractive answers in natural language, the state-of-the-art methods BIBREF6 , BIBREF7 are based on answer span extraction, even for the datasets. Generative models such as S-Net BIBREF8 suffer from a dearth of training data to cover open-domain questions. Moreover, to satisfy various information needs, intelligent agents should be capable of answering one question in multiple styles, such as concise phrases that do not contain the context of the question and well-formed sentences that make sense even without the context of the question. These capabilities complement each other; however, the methods used in previous studies cannot utilize and control different answer styles within a model. In this study, we propose a generative model, called Masque, for multi-passage RC. On the MS MARCO 2.1 dataset, Masque achieves state-of-the-art performance on the dataset's two tasks, Q&A and NLG, with different answer styles. The main contributions of this study are that our model enables the following two abilities. Problem Formulation The task considered in this paper, is defined as: Problem 1 Given a question with $J$ words $x^q = \lbrace x^q_1, \ldots , x^q_J\rbrace $ , a set of $K$ passages, where each $k$ -th passage is composed of $L$ words $x^{p_k} = \lbrace x^{p_k}_1, \ldots , x^{p_k}_{L}\rbrace $ , and an answer style $s$ , an RC system outputs an answer $y = \lbrace y_1, \ldots , y_T \rbrace $ conditioned on the style. In short, for inference, given a set of 3-tuples $(x^q, \lbrace x^{p_k}\rbrace , s)$ , the system predicts $P(y)$ . The training data is a set of 6-tuples: $(x^q, \lbrace x^{p_k}\rbrace , s, y, a, \lbrace r^{p_k}\rbrace )$ , where $a$ is 1 if the question is answerable with the provided passages and 0 otherwise, and $r^{p_k}$ is 1 if the $k$ -th passage is required to formulate the answer and 0 otherwise. Proposed Model Our proposed model, Masque, is based on multi-source abstractive summarization; the answer our model generates can be viewed as a summary from the question and multiple passages. It is also style-controllable; one model can generate the answer with the target style. Masque directly models the conditional probability $p(y|x^q, \lbrace x^{p_k}\rbrace , s)$ . In addition to multi-style learning, it considers passage ranking and answer possibility classification together as multi-task learning in order to improve accuracy. Figure 2 shows the model architecture. It consists of the following modules. 1 The question-passages reader (§ "Question-Passages Reader" ) models interactions between the question and passages. 
2 The passage ranker (§ "Passage Ranker" ) finds relevant passages to the question. 3 The answer possibility classifier (§ "Answer Possibility Classifier" ) identifies answerable questions. 4 The answer sentence decoder (§ "Answer Sentence Decoder" ) outputs a sequence of words conditioned on the style. Question-Passages Reader Given a question and passages, the question-passages reader matches them so that the interactions among the question (passage) words conditioned on the passages (question) can be captured. Let $x^q$ and $x^{p_k}$ represent one-hot vectors of words in the question and $k$ -th passage. First, this layer projects each of the one-hot vectors (of size $V$ ) into a $d_\mathrm {word}$ -dimensional continuous vector space with a pre-trained weight matrix $W^e \in \mathbb {R}^{d_\mathrm {word} \times V}$ such as GloVe BIBREF15 . Next, it uses contextualized word representations, ELMo BIBREF16 , which is a character-level two-layer bidirectional language model pre-trained on a large-scale corpus. ELMo representations allow our model to use morphological clues to form robust representations for out-of-vocabulary words unseen in training. Then, the concatenation of the word and contextualized embedding vectors is passed to a two-layer highway network BIBREF17 that is shared for the question and passages. This layer uses a stack of Transformer blocks, which are shared for the question and passages, on top of the embeddings provided by the word embedding layer. The input of the first block is immediately mapped to a $d$ -dimensional vector by a linear transformation. The outputs of this layer are sequences of $d$ -dimensional vectors: $E^{p_k} \in \mathbb {R}^{d \times L}$ for the $k$ -th passage and $E^q \in \mathbb {R}^{d \times J}$ for the question. It consists of two sub-layers: a self-attention layer and a position-wise feed-forward network. For the self-attention layer, we adopt the multi-head attention mechanism defined in BIBREF12 . The feed-forward network consists of two linear transformations with a GELU BIBREF18 activation in between, following OpenAI GPT BIBREF19 . Each sub-layer is placed inside a residual block BIBREF20 . For an input $x$ and a given sub-layer function $f$ , the output is $\mathrm {LayerNorm}(f(x)+x)$ , where $\mathrm {LayerNorm}$ indicates the layer normalization proposed in BIBREF21 . To facilitate these residual connections, all sub-layers produce outputs of dimension $d$ . Note that our model does not use any position embeddings because ELMo gives the positional information of the words in each sequence. This layer fuses information from the passages to the question as well as from the question to the passages in a dual mechanism. It first computes a similarity matrix $U^{p_k} \in \mathbb {R}^{L{\times }J}$ between the question and $k$ -th passage, as is done in BIBREF22 , where $$U^{p_k}_{lj} = {w^a}^\top [ E^{p_k}_l; E^q_j; E^{p_k}_l \odot E^q_j ]$$ (Eq. 15) indicates the similarity between the $l$ -th word of the $k$ -th passage and the $j$ -th question word. $w^a \in \mathbb {R}^{3d}$ are learnable parameters. The $\odot $ operator denotes the Hadamard product, and the $[;]$ operator means vector concatenation across the rows. Next, it obtains the row and column normalized similarity matrices $A^{p_k} = \mathrm {softmax}_j({U^{p_k}}^\top ) \in \mathbb {R}^{J\times L}$ and $B^{p_k} = \mathrm {softmax}_{l}(U^{p_k}) \in \mathbb {R}^{L \times J}$ . 
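To illustrate the Transformer sub-layer convention described earlier in this passage, where each sub-layer f is wrapped as LayerNorm(f(x) + x) and the feed-forward network uses a GELU activation, here is a minimal PyTorch module. The class name and the choice to expose the inner width as a constructor argument are ours.

```python
import torch.nn as nn

class ResidualFeedForward(nn.Module):
    """Position-wise feed-forward sub-layer placed inside a residual block: LayerNorm(f(x) + x)."""
    def __init__(self, d, d_inner):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(d, d_inner), nn.GELU(), nn.Linear(d_inner, d))
        self.norm = nn.LayerNorm(d)

    def forward(self, x):              # x: (..., d)
        return self.norm(self.ff(x) + x)
```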
We use DCN BIBREF23 as the dual attention mechanism to obtain question-to-passage representations $G^{q \rightarrow p_k} \in \mathbb {R}^{5d \times L}$ : $$\nonumber [E^{p_k}; \bar{A}^{p_k}; \bar{\bar{A}}^{p_k}; E^{p_k} \odot \bar{A}^{p_k}; E^{p_k} \odot \bar{\bar{A}}^{p_k}]$$ (Eq. 16) and passage-to-question ones $G^{p \rightarrow q} \in \mathbb {R}^{5d \times J}$ : $$\begin{split} \nonumber & [ E^{q} ; \max _k(\bar{B}^{p_k}); \max _k(\bar{\bar{B}}^{p_k}); \\ &\hspace{10.0pt} E^{q} \odot \max _k(\bar{B}^{p_k}); E^{q} \odot \max _k(\bar{\bar{B}}^{p_k}) ] \mathrm {\ \ where} \end{split}\\ \nonumber &\bar{A}^{p_k} = E^q A^{p_k}\in \mathbb {R}^{d \times L}, \ \bar{B}^{p_k} = E^{p_k} B^{p_k} \in \mathbb {R}^{d \times J} \\ \nonumber &\bar{\bar{A}}^{p_k} = \bar{B}^{p_k} A^{p_k} \in \mathbb {R}^{d \times L}, \ \bar{\bar{B}}^{p_k} = \bar{A}^{p_k} B^{p_k} \in \mathbb {R}^{d \times J}.$$ (Eq. 17) This layer uses a stack of Transformer encoder blocks for question representations and obtains $M^q \in \mathbb {R}^{d \times J}$ from $G^{p \rightarrow q}$ . It also uses an another stack for passage representations and obtains $M^{p_k} \in \mathbb {R}^{d \times L}$ from $G^{q \rightarrow p_k}$ for each $k$ -th passage. The outputs of this layer, $M^q$ and $\lbrace M^{p_k}\rbrace $ , are passed on to the answer sentence decoder; the $\lbrace M^{p_k}\rbrace $ are also passed on to the passage ranker and answer possibility classifier. Passage Ranker The passage ranker maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the relevance score of each passage. To obtain a fixed-dimensional pooled representation of each passage sequence, this layer takes the output for the first passage word, $M^{p_k}_1$ , which corresponds to the beginning-of-sentence token. It calculates the relevance of each $k$ -th passage to the question as: $$\beta ^{p_k} = \mathrm {sigmoid}({w^r}^\top M^{p_k}_1),$$ (Eq. 20) where $w^r \in \mathbb {R}^{d}$ are learnable parameters. Answer Possibility Classifier The answer possibility classifier maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the probability of the answer possibility. The classifier takes the output for the first word, $M^{p_k}_1$ , for all passages and concatenates them to obtain a fixed-dimensional representation. It calculates the answer possibility to the question as: $$P(a) = \mathrm {sigmoid}({w^c}^\top [M^{p_1}_1; \ldots ; M^{p_K}_1]),$$ (Eq. 22) where $w^c \in \mathbb {R}^{Kd}$ are learnable parameters. Answer Sentence Decoder Given the outputs provided by the reader, the decoder generates a sequence of answer words one element at a time. It is auto-regressive BIBREF24 , consuming the previously generated words as additional input at each decoding step. Let $y = \lbrace y_1, \ldots , y_{T}\rbrace $ represent one-hot vectors of words in the answer. This layer has the same components as the word embedding layer of the question-passages reader, except that it uses a unidirectional ELMo in order to ensure that the predictions for position $t$ depend only on the known outputs at positions less than $t$ . Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. 
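The two pooled heads described above (Eqs. 20 and 22) reduce to single sigmoid projections over the beginning-of-sentence outputs of the modeling layer. A short sketch, with shapes as given in the text and function names of our choosing:

```python
import torch

def passage_relevance(M_p_bos, w_r):
    """Eq. 20: relevance of one passage from its BOS-token representation M_p_bos (d,)."""
    return torch.sigmoid(w_r @ M_p_bos)

def answer_possibility(M_bos_list, w_c):
    """Eq. 22: answerability from the concatenated BOS representations of all K passages.

    M_bos_list: list of K tensors of shape (d,), w_c: (K*d,).
    """
    return torch.sigmoid(w_c @ torch.cat(M_bos_list))
```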
Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style. This layer uses a stack of Transformer decoder blocks on top of the embeddings provided by the word embedding layer. The input is immediately mapped to a $d$ -dimensional vector by a linear transformation, and the output of this layer is a sequence of $d$ -dimensional vectors: $\lbrace s_1, \ldots , s_T\rbrace $ . In addition to the encoder block, this block consists of second and third sub-layers after the self-attention block and before the feed-forward network, as shown in Figure 2 . As in BIBREF12 , the self-attention sub-layer uses a sub-sequent mask to prevent positions from attending to subsequent positions. The second and third sub-layers perform the multi-head attention over $M^q$ and $M^{p_\mathrm {all}}$ , respectively. The $M^{p_\mathrm {all}}$ is the concatenated outputs of the encoder stack for the passages, $$M^{p_\mathrm {all}} = [M^{p_1}, \ldots , M^{p_K}] \in \mathbb {R}^{d \times KL}.$$ (Eq. 27) The $[,]$ operator means vector concatenation across the columns. This attention for the concatenated passages enables our model to produce attention weights that are comparable between passages. Our extended mechanism allows both words to be generated from a fixed vocabulary and words to be copied from both the question and multiple passages. Figure 3 shows the overview. Let the extended vocabulary, $V_\mathrm {ext}$ , be the union of the common words (a small subset of the full vocabulary, $V$ , defined by the reader-side word embedding matrix) and all words appearing in the input question and passages. $P^v$ denotes the probability distribution of the $t$ -th answer word, $y_t$ , over the extended vocabulary. It is defined as: $$P^v(y_t) =\mathrm {softmax}({W^2}^\top (W^1 s_t + b^1)),$$ (Eq. 31) where the output embedding $W^2 \in \mathbb {R}^{d_\mathrm {word} \times V_\mathrm {ext}}$ is tied with the corresponding part of the input embedding BIBREF25 , and $W^1 \in \mathbb {R}^{d_\mathrm {word} \times d}$ and $b^1 \in \mathbb {R}^{d_\mathrm {word}}$ are learnable parameters. $P^v(y_t)$ is zero if $y_t$ is an out-of-vocabulary word for $V$ . The copy mechanism used in the original pointer-generator is based on the attention weights of a single-layer attentional RNN decoder BIBREF9 . The attention weights in our decoder stack are the intermediate outputs in multi-head attentions and are not suitable for the copy mechanism. Therefore, our model also uses additive attentions for the question and multiple passages on top of the decoder stack. The layer takes $s_t$ as the query and outputs $\alpha ^q_t \in \mathbb {R}^J$ ( $\alpha ^p_t \in \mathbb {R}^{KL}$ ) as the attention weights and $c^q_t \in \mathbb {R}^d$ ( $c^p_t \in \mathbb {R}^d$ ) as the context vectors for the question (passages): $$e^q_j &= {w^q}^\top \tanh (W^{qm} M_j^q + W^{qs} s_t +b^q), \\ \alpha ^q_t &= \mathrm {softmax}(e^q), \\ c^q_t &= \textstyle \sum _j \alpha ^q_{tj} M_j^q, \\ e^{p_k}_l &= {w^p}^\top \tanh (W^{pm} M_l^{p_k} + W^{ps} s_t +b^p), \\ \alpha ^p_t &= \mathrm {softmax}([e^{p_1}; \ldots ; e^{p_K}]), \\ c^p_t &= \textstyle \sum _{l} \alpha ^p_{tl} M^{p_\mathrm {all}}_{l},$$ (Eq. 33) where $w^q$ , $w^p \in \mathbb {R}^d$ , $W^{qm}$ , $W^{qs}$ , $W^{pm}$ , $W^{ps} \in \mathbb {R}^{d \times d}$ , and $b^q$ , $b^p \in \mathbb {R}^d$ are learnable parameters. 
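A sketch of the generative part of the decoder output (Eq. 31), where the projection back to the extended vocabulary reuses the weight matrix that is tied with the input embedding. Variable names mirror the equation; the tying itself is assumed to be handled outside this function.

```python
import torch

def generative_distribution(s_t, W1, b1, W2):
    """Eq. 31: P^v over the extended vocabulary from the decoder state s_t (d,).

    W1: (d_word, d), b1: (d_word,), W2: (d_word, V_ext), tied with the input embedding.
    """
    return torch.softmax(W2.t() @ (W1 @ s_t + b1), dim=0)   # (V_ext,)
```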
$P^q$ and $P^p$ are the copy distributions over the extended vocabulary, defined as: $$P^q(y_t) &= \textstyle \sum _{j: x^q_j = y_t} \alpha ^q_{tj}, \\ P^p(y_t) &= \textstyle \sum _{l: x^{p_{k(l)}}_{l} = y_t} \alpha ^p_{tl},$$ (Eq. 34) where $k(l)$ means the passage index corresponding to the $l$ -th word in the concatenated passages. The final distribution of the $t$ -th answer word, $y_t$ , is defined as a mixture of the three distributions: $$P(y_t) = \lambda ^v P^v(y_t) + \lambda ^q P^q(y_t) + \lambda ^p P^p(y_t),$$ (Eq. 36) where the mixture weights are given by $$\lambda ^v, \lambda ^q, \lambda ^p = \mathrm {softmax}(W^m [s_t; c^q_t; c^p_t] + b^m).$$ (Eq. 37) $W^m \in \mathbb {R}^{3 \times 3d}$ , $b^m \in \mathbb {R}^3$ are learnable parameters. In order not to use words in irrelevant passages, our model introduces the concept of combined attention BIBREF26 . While the original technique combines the word and sentence level attentions, our model combines the passage-level relevance $\beta ^{p_k}$ and word-level attentions $\alpha ^p_t$ by using simple scalar multiplication and re-normalization. The updated word attention is: $$\alpha ^p_{tl} & := \frac{\alpha ^p_{tl} \beta ^{p_{k(l)} }}{\sum _{l^{\prime }} \alpha ^p_{tl^{\prime }} \beta ^{p_{k(l^{\prime })}}}.$$ (Eq. 39) Loss Function We define the training loss as the sum of losses in $$L(\theta ) = L_\mathrm {dec} + \gamma _\mathrm {rank} L_\mathrm {rank} + \gamma _\mathrm {cls} L_\mathrm {cls}$$ (Eq. 41) where $\theta $ is the set of all learnable parameters, and $\gamma _\mathrm {rank}$ and $\gamma _\mathrm {cls}$ are balancing parameters. The loss of the decoder, $L_\mathrm {dec}$ , is the negative log likelihood of the whole target answer sentence averaged over $N_\mathrm {able}$ answerable examples: $$L_\mathrm {dec} = - \frac{1}{N_\mathrm {able}}\sum _{(a,y)\in \mathcal {D}} \frac{a}{T} \sum _t \log P(y_{t}),$$ (Eq. 42) where $\mathcal {D}$ is the training dataset. The losses of the passage ranker, $L_\mathrm {rank}$ , and the answer possibility classifier, $L_\mathrm {cls}$ , are the binary cross entropy between the true and predicted values averaged over all $N$ examples: $$L_\mathrm {rank} = - \frac{1}{NK} \sum _k \sum _{r^{p_k}\in \mathcal {D}} \biggl ( \begin{split} &r^{p_k} \log \beta ^{p_k} + \\ &(1-r^{p_k}) \log (1-\beta ^{p_k}) \end{split} \biggr ),\\ L_\mathrm {cls} = - \frac{1}{N} \sum _{a \in \mathcal {D}} \biggl ( \begin{split} &a \log P(a) + \\ &(1-a) \log (1-P(a)) \end{split} \biggr ).$$ (Eq. 43) Setup We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\subset $ ANS $\subset $ ALL. We trained our model on a machine with eight NVIDIA P100 GPUs. 
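For Eqs. 41-43 above, a per-batch sketch of the joint objective. We assume the decoder loss has already been reduced to a mean token negative log-likelihood per example, that the labels are given as float tensors, and that the balancing defaults (0.5 and 0.1) are taken from the experimental setup.

```python
import torch
import torch.nn.functional as F

def training_loss(dec_nll, a, beta, r, p_a, gamma_rank=0.5, gamma_cls=0.1):
    """Eqs. 41-43 on one batch.

    dec_nll: (N,) mean token NLL per example (already divided by T),
    a: (N,) answerability labels, p_a: (N,) predicted answer possibilities,
    beta, r: (N, K) predicted and gold passage relevances.
    """
    L_dec = (a * dec_nll).sum() / a.sum().clamp(min=1)   # averaged over answerable examples only
    L_rank = F.binary_cross_entropy(beta, r)             # averaged over all N*K passages
    L_cls = F.binary_cross_entropy(p_a, a)               # averaged over all N examples
    return L_dec + gamma_rank * L_rank + gamma_cls * L_cls
```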
Our model was jointly trained with the two answer styles in the ALL set for a total of eight epochs with a batch size of 80. The training took roughly six days. The ensemble model consists of six training runs with the identical architecture and hyperparameters. The hidden size $d$ was 304, and the number of attention heads was 8. The inner state size of the feed-forward networks was 256. The numbers of shared encoding blocks, modeling blocks for question, modeling blocks for passages, and decoder blocks were 3, 2, 5, and 8, respectively. We used the pre-trained uncased 300-dimensional GloVe BIBREF15 and the original 512-dimensional ELMo BIBREF16 . We used the spaCy tokenizer, and all words were lowercased except the input for ELMo. The number of common words in $V_\mathrm {ext}$ was 5,000. We used the Adam optimization BIBREF27 with $\beta _1 = 0.9$ , $\beta _2 = 0.999$ , and $\epsilon = 10^{-8}$ . Weights were initialized using $N(0, 0.02)$ , except that the biases of all the linear transformations were initialized with zero vectors. The learning rate was increased linearly from zero to $2.5 \times 10^{-4}$ in the first 2,000 steps and annealed to 0 using a cosine schedule. All parameter gradients were clipped to a maximum norm of 1. An exponential moving average was applied to all trainable variables with a decay rate 0.9995. The balancing factors of joint learning, $\lambda _\mathrm {rank}$ and $\lambda _\mathrm {cls}$ , were set to 0.5 and 0.1. We used a modified version of the L $_2$ regularization proposed in BIBREF28 , with $w = 0.01$ . We additionally used a dropout BIBREF29 rate of 0.3 for all highway networks and residual and scaled dot-product attention operations in the multi-head attention mechanism. We also used one-sided label smoothing BIBREF30 for the passage relevance and answer possibility labels. We smoothed only the positive labels to 0.9. Results Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in terms of Rouge-L. In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1. The capability of creating abstractive summaries from the question and passages contributed to its improvements over the state-of-the-art extractive approaches BIBREF6 , BIBREF7 . Table 3 shows the results of the ablation test for our model (controlled with the NLG style) on the well-formed answers of the WFA dev. set. Our model, which was trained with the ALL set consisting of the two styles, outperformed the model trained with the WFA set consisting of the single style. Multi-style learning allowed our model to improve NLG performance by also using non-sentence answers. Table 3 shows that our model outperformed the model that used RNNs and self-attentions instead of Transformer blocks as in MCAN BIBREF11 . Our deep Transformer decoder captured the interaction among the question, the passages, and the answer better than a single-layer LSTM decoder. Table 3 shows that our model (jointly trained with the passage ranker and answer possibility classifier) outperformed the model that did not use the ranker and classifier. The joint learning has a regularization effect on the question-passages reader. We also confirmed that the gold passage ranker, which can predict passage relevances perfectly, improves RC performance significantly. Passage re-ranking will be a key to developing a system that can outperform humans. 
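Two of the training details mentioned above, the exponential moving average of trainable variables (decay 0.9995) and the one-sided smoothing of positive relevance/answerability labels to 0.9, in sketch form. The helper class is a generic EMA implementation, not the authors' code.

```python
import torch

def one_sided_smoothing(labels, positive_value=0.9):
    """Smooth only the positive labels (1 -> 0.9); negative labels stay 0."""
    return labels * positive_value

class ParameterEMA:
    """Exponential moving average of model parameters."""
    def __init__(self, model, decay=0.9995):
        self.decay = decay
        self.shadow = {n: p.detach().clone()
                       for n, p in model.named_parameters() if p.requires_grad}

    @torch.no_grad()
    def update(self, model):
        for n, p in model.named_parameters():
            if n in self.shadow:
                self.shadow[n].mul_(self.decay).add_(p.detach(), alpha=1.0 - self.decay)
```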
Table 4 shows the passage re-ranking performance for the ten given passages on the ANS dev. set. Our ranker improved the initial ranking provided by Bing by a significant margin. Also, the ranker shares the question-passages reader with the answer decoder, and this sharing contributed to the improvements over the ranker trained without the answer decoder. This result is similar to those reported in BIBREF33 . Moreover, the joint learning with the answer possibility classifier and multiple answer styles, which enables our model to learn from a larger number of data, improved the re-ranking. Figure 4 shows the precision-recall curve of answer possibility classification on the ALL dev. set, where the positive class is the answerable data. Our model identified the answerable questions well. The maximum $F_1$ score was 0.7893. This is the first report on answer possibility classification with MS MARCO 2.1. Figure 5 shows the lengths of the answers generated by our model, which are broken down by answer style and query type. The generated answers were relatively shorter than the reference answers but well controlled with the target style in every query type. Also, we should note that our model does not guarantee the consistency in terms of meaning across the answer styles. We randomly selected 100 questions and compared the answers our model generated with the NLG and Q&A styles. The consistency ratio was 0.81, where major errors were due to copying words from different parts of the passages and generating different words, especially yes/no, from a fixed vocabulary. Appendix "Reading Comprehension Examples generated by Masque from MS MARCO 2.1" shows examples of generated answers. We found (d) style errors; (e) yes/no classification errors; (f) copy errors with respect to numerical values; and (c,e) grammatical errors that were originally contained in the inputs. Conclusion We believe our study makes two contributions to the study of multi-passage RC with NLG. Our model enables 1) multi-source abstractive summarization based RC and 2) style-controllable RC. The key strength of our model is its high accuracy of generating abstractive summaries from the question and passages; our model achieved state-of-the-art performance in terms of Rouge-L on the Q&A and NLG tasks of MS MARCO 2.1 that have different answer styles BIBREF5 . The styles considered in this paper are only related to the context of the question in the answer sentence; our model will be promising for controlling other styles such as length and speaking styles. Future work will involve exploring the potential of hybrid models combining extractive and abstractive approaches and improving the passage re-ranking and answerable question identification.
Yes
2d274c93901c193cf7ad227ab28b1436c5f410af
2d274c93901c193cf7ad227ab28b1436c5f410af_0
Q: What are the baselines that Masque is compared against? Text: Introduction Question answering has been a long-standing research problem. Recently, reading comprehension (RC), a challenge to answer a question given textual evidence provided in a document set, has received much attention. Here, current mainstream studies have treated RC as a process of extracting an answer span from one passage BIBREF0 , BIBREF1 or multiple passages BIBREF2 , which is usually done by predicting the start and end positions of the answer BIBREF3 , BIBREF4 . The demand for answering questions in natural language is increasing rapidly, and this has led to the development of smart devices such as Siri and Alexa. However, in comparison with answer span extraction, the natural language generation (NLG) ability for RC has been less studied. While datasets such as MS MARCO BIBREF5 have been proposed for providing abstractive answers in natural language, the state-of-the-art methods BIBREF6 , BIBREF7 are based on answer span extraction, even for the datasets. Generative models such as S-Net BIBREF8 suffer from a dearth of training data to cover open-domain questions. Moreover, to satisfy various information needs, intelligent agents should be capable of answering one question in multiple styles, such as concise phrases that do not contain the context of the question and well-formed sentences that make sense even without the context of the question. These capabilities complement each other; however, the methods used in previous studies cannot utilize and control different answer styles within a model. In this study, we propose a generative model, called Masque, for multi-passage RC. On the MS MARCO 2.1 dataset, Masque achieves state-of-the-art performance on the dataset's two tasks, Q&A and NLG, with different answer styles. The main contributions of this study are that our model enables the following two abilities. Problem Formulation The task considered in this paper, is defined as: Problem 1 Given a question with $J$ words $x^q = \lbrace x^q_1, \ldots , x^q_J\rbrace $ , a set of $K$ passages, where each $k$ -th passage is composed of $L$ words $x^{p_k} = \lbrace x^{p_k}_1, \ldots , x^{p_k}_{L}\rbrace $ , and an answer style $s$ , an RC system outputs an answer $y = \lbrace y_1, \ldots , y_T \rbrace $ conditioned on the style. In short, for inference, given a set of 3-tuples $(x^q, \lbrace x^{p_k}\rbrace , s)$ , the system predicts $P(y)$ . The training data is a set of 6-tuples: $(x^q, \lbrace x^{p_k}\rbrace , s, y, a, \lbrace r^{p_k}\rbrace )$ , where $a$ is 1 if the question is answerable with the provided passages and 0 otherwise, and $r^{p_k}$ is 1 if the $k$ -th passage is required to formulate the answer and 0 otherwise. Proposed Model Our proposed model, Masque, is based on multi-source abstractive summarization; the answer our model generates can be viewed as a summary from the question and multiple passages. It is also style-controllable; one model can generate the answer with the target style. Masque directly models the conditional probability $p(y|x^q, \lbrace x^{p_k}\rbrace , s)$ . In addition to multi-style learning, it considers passage ranking and answer possibility classification together as multi-task learning in order to improve accuracy. Figure 2 shows the model architecture. It consists of the following modules. 1 The question-passages reader (§ "Question-Passages Reader" ) models interactions between the question and passages. 
2 The passage ranker (§ "Passage Ranker" ) finds relevant passages to the question. 3 The answer possibility classifier (§ "Answer Possibility Classifier" ) identifies answerable questions. 4 The answer sentence decoder (§ "Answer Sentence Decoder" ) outputs a sequence of words conditioned on the style. Question-Passages Reader Given a question and passages, the question-passages reader matches them so that the interactions among the question (passage) words conditioned on the passages (question) can be captured. Let $x^q$ and $x^{p_k}$ represent one-hot vectors of words in the question and $k$ -th passage. First, this layer projects each of the one-hot vectors (of size $V$ ) into a $d_\mathrm {word}$ -dimensional continuous vector space with a pre-trained weight matrix $W^e \in \mathbb {R}^{d_\mathrm {word} \times V}$ such as GloVe BIBREF15 . Next, it uses contextualized word representations, ELMo BIBREF16 , which is a character-level two-layer bidirectional language model pre-trained on a large-scale corpus. ELMo representations allow our model to use morphological clues to form robust representations for out-of-vocabulary words unseen in training. Then, the concatenation of the word and contextualized embedding vectors is passed to a two-layer highway network BIBREF17 that is shared for the question and passages. This layer uses a stack of Transformer blocks, which are shared for the question and passages, on top of the embeddings provided by the word embedding layer. The input of the first block is immediately mapped to a $d$ -dimensional vector by a linear transformation. The outputs of this layer are sequences of $d$ -dimensional vectors: $E^{p_k} \in \mathbb {R}^{d \times L}$ for the $k$ -th passage and $E^q \in \mathbb {R}^{d \times J}$ for the question. It consists of two sub-layers: a self-attention layer and a position-wise feed-forward network. For the self-attention layer, we adopt the multi-head attention mechanism defined in BIBREF12 . The feed-forward network consists of two linear transformations with a GELU BIBREF18 activation in between, following OpenAI GPT BIBREF19 . Each sub-layer is placed inside a residual block BIBREF20 . For an input $x$ and a given sub-layer function $f$ , the output is $\mathrm {LayerNorm}(f(x)+x)$ , where $\mathrm {LayerNorm}$ indicates the layer normalization proposed in BIBREF21 . To facilitate these residual connections, all sub-layers produce outputs of dimension $d$ . Note that our model does not use any position embeddings because ELMo gives the positional information of the words in each sequence. This layer fuses information from the passages to the question as well as from the question to the passages in a dual mechanism. It first computes a similarity matrix $U^{p_k} \in \mathbb {R}^{L{\times }J}$ between the question and $k$ -th passage, as is done in BIBREF22 , where $$U^{p_k}_{lj} = {w^a}^\top [ E^{p_k}_l; E^q_j; E^{p_k}_l \odot E^q_j ]$$ (Eq. 15) indicates the similarity between the $l$ -th word of the $k$ -th passage and the $j$ -th question word. $w^a \in \mathbb {R}^{3d}$ are learnable parameters. The $\odot $ operator denotes the Hadamard product, and the $[;]$ operator means vector concatenation across the rows. Next, it obtains the row and column normalized similarity matrices $A^{p_k} = \mathrm {softmax}_j({U^{p_k}}^\top ) \in \mathbb {R}^{J\times L}$ and $B^{p_k} = \mathrm {softmax}_{l}(U^{p_k}) \in \mathbb {R}^{L \times J}$ . 
We use DCN BIBREF23 as the dual attention mechanism to obtain question-to-passage representations $G^{q \rightarrow p_k} \in \mathbb {R}^{5d \times L}$ : $$\nonumber [E^{p_k}; \bar{A}^{p_k}; \bar{\bar{A}}^{p_k}; E^{p_k} \odot \bar{A}^{p_k}; E^{p_k} \odot \bar{\bar{A}}^{p_k}]$$ (Eq. 16) and passage-to-question ones $G^{p \rightarrow q} \in \mathbb {R}^{5d \times J}$ : $$\begin{split} \nonumber & [ E^{q} ; \max _k(\bar{B}^{p_k}); \max _k(\bar{\bar{B}}^{p_k}); \\ &\hspace{10.0pt} E^{q} \odot \max _k(\bar{B}^{p_k}); E^{q} \odot \max _k(\bar{\bar{B}}^{p_k}) ] \mathrm {\ \ where} \end{split}\\ \nonumber &\bar{A}^{p_k} = E^q A^{p_k}\in \mathbb {R}^{d \times L}, \ \bar{B}^{p_k} = E^{p_k} B^{p_k} \in \mathbb {R}^{d \times J} \\ \nonumber &\bar{\bar{A}}^{p_k} = \bar{B}^{p_k} A^{p_k} \in \mathbb {R}^{d \times L}, \ \bar{\bar{B}}^{p_k} = \bar{A}^{p_k} B^{p_k} \in \mathbb {R}^{d \times J}.$$ (Eq. 17) This layer uses a stack of Transformer encoder blocks for question representations and obtains $M^q \in \mathbb {R}^{d \times J}$ from $G^{p \rightarrow q}$ . It also uses an another stack for passage representations and obtains $M^{p_k} \in \mathbb {R}^{d \times L}$ from $G^{q \rightarrow p_k}$ for each $k$ -th passage. The outputs of this layer, $M^q$ and $\lbrace M^{p_k}\rbrace $ , are passed on to the answer sentence decoder; the $\lbrace M^{p_k}\rbrace $ are also passed on to the passage ranker and answer possibility classifier. Passage Ranker The passage ranker maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the relevance score of each passage. To obtain a fixed-dimensional pooled representation of each passage sequence, this layer takes the output for the first passage word, $M^{p_k}_1$ , which corresponds to the beginning-of-sentence token. It calculates the relevance of each $k$ -th passage to the question as: $$\beta ^{p_k} = \mathrm {sigmoid}({w^r}^\top M^{p_k}_1),$$ (Eq. 20) where $w^r \in \mathbb {R}^{d}$ are learnable parameters. Answer Possibility Classifier The answer possibility classifier maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the probability of the answer possibility. The classifier takes the output for the first word, $M^{p_k}_1$ , for all passages and concatenates them to obtain a fixed-dimensional representation. It calculates the answer possibility to the question as: $$P(a) = \mathrm {sigmoid}({w^c}^\top [M^{p_1}_1; \ldots ; M^{p_K}_1]),$$ (Eq. 22) where $w^c \in \mathbb {R}^{Kd}$ are learnable parameters. Answer Sentence Decoder Given the outputs provided by the reader, the decoder generates a sequence of answer words one element at a time. It is auto-regressive BIBREF24 , consuming the previously generated words as additional input at each decoding step. Let $y = \lbrace y_1, \ldots , y_{T}\rbrace $ represent one-hot vectors of words in the answer. This layer has the same components as the word embedding layer of the question-passages reader, except that it uses a unidirectional ELMo in order to ensure that the predictions for position $t$ depend only on the known outputs at positions less than $t$ . Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. 
Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style. This layer uses a stack of Transformer decoder blocks on top of the embeddings provided by the word embedding layer. The input is immediately mapped to a $d$ -dimensional vector by a linear transformation, and the output of this layer is a sequence of $d$ -dimensional vectors: $\lbrace s_1, \ldots , s_T\rbrace $ . In addition to the encoder block, this block consists of second and third sub-layers after the self-attention block and before the feed-forward network, as shown in Figure 2 . As in BIBREF12 , the self-attention sub-layer uses a sub-sequent mask to prevent positions from attending to subsequent positions. The second and third sub-layers perform the multi-head attention over $M^q$ and $M^{p_\mathrm {all}}$ , respectively. The $M^{p_\mathrm {all}}$ is the concatenated outputs of the encoder stack for the passages, $$M^{p_\mathrm {all}} = [M^{p_1}, \ldots , M^{p_K}] \in \mathbb {R}^{d \times KL}.$$ (Eq. 27) The $[,]$ operator means vector concatenation across the columns. This attention for the concatenated passages enables our model to produce attention weights that are comparable between passages. Our extended mechanism allows both words to be generated from a fixed vocabulary and words to be copied from both the question and multiple passages. Figure 3 shows the overview. Let the extended vocabulary, $V_\mathrm {ext}$ , be the union of the common words (a small subset of the full vocabulary, $V$ , defined by the reader-side word embedding matrix) and all words appearing in the input question and passages. $P^v$ denotes the probability distribution of the $t$ -th answer word, $y_t$ , over the extended vocabulary. It is defined as: $$P^v(y_t) =\mathrm {softmax}({W^2}^\top (W^1 s_t + b^1)),$$ (Eq. 31) where the output embedding $W^2 \in \mathbb {R}^{d_\mathrm {word} \times V_\mathrm {ext}}$ is tied with the corresponding part of the input embedding BIBREF25 , and $W^1 \in \mathbb {R}^{d_\mathrm {word} \times d}$ and $b^1 \in \mathbb {R}^{d_\mathrm {word}}$ are learnable parameters. $P^v(y_t)$ is zero if $y_t$ is an out-of-vocabulary word for $V$ . The copy mechanism used in the original pointer-generator is based on the attention weights of a single-layer attentional RNN decoder BIBREF9 . The attention weights in our decoder stack are the intermediate outputs in multi-head attentions and are not suitable for the copy mechanism. Therefore, our model also uses additive attentions for the question and multiple passages on top of the decoder stack. The layer takes $s_t$ as the query and outputs $\alpha ^q_t \in \mathbb {R}^J$ ( $\alpha ^p_t \in \mathbb {R}^{KL}$ ) as the attention weights and $c^q_t \in \mathbb {R}^d$ ( $c^p_t \in \mathbb {R}^d$ ) as the context vectors for the question (passages): $$e^q_j &= {w^q}^\top \tanh (W^{qm} M_j^q + W^{qs} s_t +b^q), \\ \alpha ^q_t &= \mathrm {softmax}(e^q), \\ c^q_t &= \textstyle \sum _j \alpha ^q_{tj} M_j^q, \\ e^{p_k}_l &= {w^p}^\top \tanh (W^{pm} M_l^{p_k} + W^{ps} s_t +b^p), \\ \alpha ^p_t &= \mathrm {softmax}([e^{p_1}; \ldots ; e^{p_K}]), \\ c^p_t &= \textstyle \sum _{l} \alpha ^p_{tl} M^{p_\mathrm {all}}_{l},$$ (Eq. 33) where $w^q$ , $w^p \in \mathbb {R}^d$ , $W^{qm}$ , $W^{qs}$ , $W^{pm}$ , $W^{ps} \in \mathbb {R}^{d \times d}$ , and $b^q$ , $b^p \in \mathbb {R}^d$ are learnable parameters. 
$P^q$ and $P^p$ are the copy distributions over the extended vocabulary, defined as: $$P^q(y_t) &= \textstyle \sum _{j: x^q_j = y_t} \alpha ^q_{tj}, \\ P^p(y_t) &= \textstyle \sum _{l: x^{p_{k(l)}}_{l} = y_t} \alpha ^p_{tl},$$ (Eq. 34) where $k(l)$ means the passage index corresponding to the $l$ -th word in the concatenated passages. The final distribution of the $t$ -th answer word, $y_t$ , is defined as a mixture of the three distributions: $$P(y_t) = \lambda ^v P^v(y_t) + \lambda ^q P^q(y_t) + \lambda ^p P^p(y_t),$$ (Eq. 36) where the mixture weights are given by $$\lambda ^v, \lambda ^q, \lambda ^p = \mathrm {softmax}(W^m [s_t; c^q_t; c^p_t] + b^m).$$ (Eq. 37) $W^m \in \mathbb {R}^{3 \times 3d}$ , $b^m \in \mathbb {R}^3$ are learnable parameters. In order not to use words in irrelevant passages, our model introduces the concept of combined attention BIBREF26 . While the original technique combines the word and sentence level attentions, our model combines the passage-level relevance $\beta ^{p_k}$ and word-level attentions $\alpha ^p_t$ by using simple scalar multiplication and re-normalization. The updated word attention is: $$\alpha ^p_{tl} & := \frac{\alpha ^p_{tl} \beta ^{p_{k(l)} }}{\sum _{l^{\prime }} \alpha ^p_{tl^{\prime }} \beta ^{p_{k(l^{\prime })}}}.$$ (Eq. 39) Loss Function We define the training loss as the sum of losses in $$L(\theta ) = L_\mathrm {dec} + \gamma _\mathrm {rank} L_\mathrm {rank} + \gamma _\mathrm {cls} L_\mathrm {cls}$$ (Eq. 41) where $\theta $ is the set of all learnable parameters, and $\gamma _\mathrm {rank}$ and $\gamma _\mathrm {cls}$ are balancing parameters. The loss of the decoder, $L_\mathrm {dec}$ , is the negative log likelihood of the whole target answer sentence averaged over $N_\mathrm {able}$ answerable examples: $$L_\mathrm {dec} = - \frac{1}{N_\mathrm {able}}\sum _{(a,y)\in \mathcal {D}} \frac{a}{T} \sum _t \log P(y_{t}),$$ (Eq. 42) where $\mathcal {D}$ is the training dataset. The losses of the passage ranker, $L_\mathrm {rank}$ , and the answer possibility classifier, $L_\mathrm {cls}$ , are the binary cross entropy between the true and predicted values averaged over all $N$ examples: $$L_\mathrm {rank} = - \frac{1}{NK} \sum _k \sum _{r^{p_k}\in \mathcal {D}} \biggl ( \begin{split} &r^{p_k} \log \beta ^{p_k} + \\ &(1-r^{p_k}) \log (1-\beta ^{p_k}) \end{split} \biggr ),\\ L_\mathrm {cls} = - \frac{1}{N} \sum _{a \in \mathcal {D}} \biggl ( \begin{split} &a \log P(a) + \\ &(1-a) \log (1-P(a)) \end{split} \biggr ).$$ (Eq. 43) Setup We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\subset $ ANS $\subset $ ALL. We trained our model on a machine with eight NVIDIA P100 GPUs. 
Our model was jointly trained with the two answer styles in the ALL set for a total of eight epochs with a batch size of 80. The training took roughly six days. The ensemble model consists of six training runs with the identical architecture and hyperparameters. The hidden size $d$ was 304, and the number of attention heads was 8. The inner state size of the feed-forward networks was 256. The numbers of shared encoding blocks, modeling blocks for question, modeling blocks for passages, and decoder blocks were 3, 2, 5, and 8, respectively. We used the pre-trained uncased 300-dimensional GloVe BIBREF15 and the original 512-dimensional ELMo BIBREF16 . We used the spaCy tokenizer, and all words were lowercased except the input for ELMo. The number of common words in $V_\mathrm {ext}$ was 5,000. We used the Adam optimization BIBREF27 with $\beta _1 = 0.9$ , $\beta _2 = 0.999$ , and $\epsilon = 10^{-8}$ . Weights were initialized using $N(0, 0.02)$ , except that the biases of all the linear transformations were initialized with zero vectors. The learning rate was increased linearly from zero to $2.5 \times 10^{-4}$ in the first 2,000 steps and annealed to 0 using a cosine schedule. All parameter gradients were clipped to a maximum norm of 1. An exponential moving average was applied to all trainable variables with a decay rate 0.9995. The balancing factors of joint learning, $\lambda _\mathrm {rank}$ and $\lambda _\mathrm {cls}$ , were set to 0.5 and 0.1. We used a modified version of the L $_2$ regularization proposed in BIBREF28 , with $w = 0.01$ . We additionally used a dropout BIBREF29 rate of 0.3 for all highway networks and residual and scaled dot-product attention operations in the multi-head attention mechanism. We also used one-sided label smoothing BIBREF30 for the passage relevance and answer possibility labels. We smoothed only the positive labels to 0.9. Results Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in terms of Rouge-L. In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1. The capability of creating abstractive summaries from the question and passages contributed to its improvements over the state-of-the-art extractive approaches BIBREF6 , BIBREF7 . Table 3 shows the results of the ablation test for our model (controlled with the NLG style) on the well-formed answers of the WFA dev. set. Our model, which was trained with the ALL set consisting of the two styles, outperformed the model trained with the WFA set consisting of the single style. Multi-style learning allowed our model to improve NLG performance by also using non-sentence answers. Table 3 shows that our model outperformed the model that used RNNs and self-attentions instead of Transformer blocks as in MCAN BIBREF11 . Our deep Transformer decoder captured the interaction among the question, the passages, and the answer better than a single-layer LSTM decoder. Table 3 shows that our model (jointly trained with the passage ranker and answer possibility classifier) outperformed the model that did not use the ranker and classifier. The joint learning has a regularization effect on the question-passages reader. We also confirmed that the gold passage ranker, which can predict passage relevances perfectly, improves RC performance significantly. Passage re-ranking will be a key to developing a system that can outperform humans. 
Table 4 shows the passage re-ranking performance for the ten given passages on the ANS dev. set. Our ranker improved the initial ranking provided by Bing by a significant margin. Also, the ranker shares the question-passages reader with the answer decoder, and this sharing contributed to the improvements over the ranker trained without the answer decoder. This result is similar to those reported in BIBREF33 . Moreover, the joint learning with the answer possibility classifier and multiple answer styles, which enables our model to learn from a larger number of data, improved the re-ranking. Figure 4 shows the precision-recall curve of answer possibility classification on the ALL dev. set, where the positive class is the answerable data. Our model identified the answerable questions well. The maximum $F_1$ score was 0.7893. This is the first report on answer possibility classification with MS MARCO 2.1. Figure 5 shows the lengths of the answers generated by our model, which are broken down by answer style and query type. The generated answers were relatively shorter than the reference answers but well controlled with the target style in every query type. Also, we should note that our model does not guarantee the consistency in terms of meaning across the answer styles. We randomly selected 100 questions and compared the answers our model generated with the NLG and Q&A styles. The consistency ratio was 0.81, where major errors were due to copying words from different parts of the passages and generating different words, especially yes/no, from a fixed vocabulary. Appendix "Reading Comprehension Examples generated by Masque from MS MARCO 2.1" shows examples of generated answers. We found (d) style errors; (e) yes/no classification errors; (f) copy errors with respect to numerical values; and (c,e) grammatical errors that were originally contained in the inputs. Conclusion We believe our study makes two contributions to the study of multi-passage RC with NLG. Our model enables 1) multi-source abstractive summarization based RC and 2) style-controllable RC. The key strength of our model is its high accuracy of generating abstractive summaries from the question and passages; our model achieved state-of-the-art performance in terms of Rouge-L on the Q&A and NLG tasks of MS MARCO 2.1 that have different answer styles BIBREF5 . The styles considered in this paper are only related to the context of the question in the answer sentence; our model will be promising for controlling other styles such as length and speaking styles. Future work will involve exploring the potential of hybrid models combining extractive and abstractive approaches and improving the passage re-ranking and answerable question identification.
BiDAF, Deep Cascade QA, S-Net+CES2S, BERT+Multi-PGNet, Selector+CCG, VNET, DECAPROP, MHPGM+NOIC, ConZNet, RMR+A2D
e63bde5c7b154fbe990c3185e2626d13a1bad171
e63bde5c7b154fbe990c3185e2626d13a1bad171_0
Q: What is the performance achieved on NarrativeQA? Text: Introduction Question answering has been a long-standing research problem. Recently, reading comprehension (RC), a challenge to answer a question given textual evidence provided in a document set, has received much attention. Here, current mainstream studies have treated RC as a process of extracting an answer span from one passage BIBREF0 , BIBREF1 or multiple passages BIBREF2 , which is usually done by predicting the start and end positions of the answer BIBREF3 , BIBREF4 . The demand for answering questions in natural language is increasing rapidly, and this has led to the development of smart devices such as Siri and Alexa. However, in comparison with answer span extraction, the natural language generation (NLG) ability for RC has been less studied. While datasets such as MS MARCO BIBREF5 have been proposed for providing abstractive answers in natural language, the state-of-the-art methods BIBREF6 , BIBREF7 are based on answer span extraction, even for the datasets. Generative models such as S-Net BIBREF8 suffer from a dearth of training data to cover open-domain questions. Moreover, to satisfy various information needs, intelligent agents should be capable of answering one question in multiple styles, such as concise phrases that do not contain the context of the question and well-formed sentences that make sense even without the context of the question. These capabilities complement each other; however, the methods used in previous studies cannot utilize and control different answer styles within a model. In this study, we propose a generative model, called Masque, for multi-passage RC. On the MS MARCO 2.1 dataset, Masque achieves state-of-the-art performance on the dataset's two tasks, Q&A and NLG, with different answer styles. The main contributions of this study are that our model enables the following two abilities. Problem Formulation The task considered in this paper, is defined as: Problem 1 Given a question with $J$ words $x^q = \lbrace x^q_1, \ldots , x^q_J\rbrace $ , a set of $K$ passages, where each $k$ -th passage is composed of $L$ words $x^{p_k} = \lbrace x^{p_k}_1, \ldots , x^{p_k}_{L}\rbrace $ , and an answer style $s$ , an RC system outputs an answer $y = \lbrace y_1, \ldots , y_T \rbrace $ conditioned on the style. In short, for inference, given a set of 3-tuples $(x^q, \lbrace x^{p_k}\rbrace , s)$ , the system predicts $P(y)$ . The training data is a set of 6-tuples: $(x^q, \lbrace x^{p_k}\rbrace , s, y, a, \lbrace r^{p_k}\rbrace )$ , where $a$ is 1 if the question is answerable with the provided passages and 0 otherwise, and $r^{p_k}$ is 1 if the $k$ -th passage is required to formulate the answer and 0 otherwise. Proposed Model Our proposed model, Masque, is based on multi-source abstractive summarization; the answer our model generates can be viewed as a summary from the question and multiple passages. It is also style-controllable; one model can generate the answer with the target style. Masque directly models the conditional probability $p(y|x^q, \lbrace x^{p_k}\rbrace , s)$ . In addition to multi-style learning, it considers passage ranking and answer possibility classification together as multi-task learning in order to improve accuracy. Figure 2 shows the model architecture. It consists of the following modules. 1 The question-passages reader (§ "Question-Passages Reader" ) models interactions between the question and passages. 
2 The passage ranker (§ "Passage Ranker" ) finds relevant passages to the question. 3 The answer possibility classifier (§ "Answer Possibility Classifier" ) identifies answerable questions. 4 The answer sentence decoder (§ "Answer Sentence Decoder" ) outputs a sequence of words conditioned on the style. Question-Passages Reader Given a question and passages, the question-passages reader matches them so that the interactions among the question (passage) words conditioned on the passages (question) can be captured. Let $x^q$ and $x^{p_k}$ represent one-hot vectors of words in the question and $k$ -th passage. First, this layer projects each of the one-hot vectors (of size $V$ ) into a $d_\mathrm {word}$ -dimensional continuous vector space with a pre-trained weight matrix $W^e \in \mathbb {R}^{d_\mathrm {word} \times V}$ such as GloVe BIBREF15 . Next, it uses contextualized word representations, ELMo BIBREF16 , which is a character-level two-layer bidirectional language model pre-trained on a large-scale corpus. ELMo representations allow our model to use morphological clues to form robust representations for out-of-vocabulary words unseen in training. Then, the concatenation of the word and contextualized embedding vectors is passed to a two-layer highway network BIBREF17 that is shared for the question and passages. This layer uses a stack of Transformer blocks, which are shared for the question and passages, on top of the embeddings provided by the word embedding layer. The input of the first block is immediately mapped to a $d$ -dimensional vector by a linear transformation. The outputs of this layer are sequences of $d$ -dimensional vectors: $E^{p_k} \in \mathbb {R}^{d \times L}$ for the $k$ -th passage and $E^q \in \mathbb {R}^{d \times J}$ for the question. It consists of two sub-layers: a self-attention layer and a position-wise feed-forward network. For the self-attention layer, we adopt the multi-head attention mechanism defined in BIBREF12 . The feed-forward network consists of two linear transformations with a GELU BIBREF18 activation in between, following OpenAI GPT BIBREF19 . Each sub-layer is placed inside a residual block BIBREF20 . For an input $x$ and a given sub-layer function $f$ , the output is $\mathrm {LayerNorm}(f(x)+x)$ , where $\mathrm {LayerNorm}$ indicates the layer normalization proposed in BIBREF21 . To facilitate these residual connections, all sub-layers produce outputs of dimension $d$ . Note that our model does not use any position embeddings because ELMo gives the positional information of the words in each sequence. This layer fuses information from the passages to the question as well as from the question to the passages in a dual mechanism. It first computes a similarity matrix $U^{p_k} \in \mathbb {R}^{L{\times }J}$ between the question and $k$ -th passage, as is done in BIBREF22 , where $$U^{p_k}_{lj} = {w^a}^\top [ E^{p_k}_l; E^q_j; E^{p_k}_l \odot E^q_j ]$$ (Eq. 15) indicates the similarity between the $l$ -th word of the $k$ -th passage and the $j$ -th question word. $w^a \in \mathbb {R}^{3d}$ are learnable parameters. The $\odot $ operator denotes the Hadamard product, and the $[;]$ operator means vector concatenation across the rows. Next, it obtains the row and column normalized similarity matrices $A^{p_k} = \mathrm {softmax}_j({U^{p_k}}^\top ) \in \mathbb {R}^{J\times L}$ and $B^{p_k} = \mathrm {softmax}_{l}(U^{p_k}) \in \mathbb {R}^{L \times J}$ . 
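A minimal PyTorch sketch of the similarity matrix in Eq. 15 and the two normalized matrices just described; the softmax axes follow our reading of the definitions, and the shapes are illustrative.

```python
import torch

def dual_attention_similarity(E_p, E_q, w_a):
    """E_p: (d, L) passage encodings, E_q: (d, J) question encodings, w_a: (3d,).
    Returns U of shape (L, J) plus the normalized matrices A (J, L) and B (L, J)."""
    d, L = E_p.shape
    _, J = E_q.shape
    Ep = E_p.t().unsqueeze(1).expand(L, J, d)        # (L, J, d)
    Eq = E_q.t().unsqueeze(0).expand(L, J, d)        # (L, J, d)
    U = torch.cat([Ep, Eq, Ep * Eq], dim=-1) @ w_a   # (L, J), Eq. 15
    A = torch.softmax(U.t(), dim=0)                  # each column sums to 1 over the J question words
    B = torch.softmax(U, dim=0)                      # each column sums to 1 over the L passage words
    return U, A, B
```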
We use DCN BIBREF23 as the dual attention mechanism to obtain question-to-passage representations $G^{q \rightarrow p_k} \in \mathbb {R}^{5d \times L}$ : $$\nonumber [E^{p_k}; \bar{A}^{p_k}; \bar{\bar{A}}^{p_k}; E^{p_k} \odot \bar{A}^{p_k}; E^{p_k} \odot \bar{\bar{A}}^{p_k}]$$ (Eq. 16) and passage-to-question ones $G^{p \rightarrow q} \in \mathbb {R}^{5d \times J}$ : $$\begin{split} \nonumber & [ E^{q} ; \max _k(\bar{B}^{p_k}); \max _k(\bar{\bar{B}}^{p_k}); \\ &\hspace{10.0pt} E^{q} \odot \max _k(\bar{B}^{p_k}); E^{q} \odot \max _k(\bar{\bar{B}}^{p_k}) ] \mathrm {\ \ where} \end{split}\\ \nonumber &\bar{A}^{p_k} = E^q A^{p_k}\in \mathbb {R}^{d \times L}, \ \bar{B}^{p_k} = E^{p_k} B^{p_k} \in \mathbb {R}^{d \times J} \\ \nonumber &\bar{\bar{A}}^{p_k} = \bar{B}^{p_k} A^{p_k} \in \mathbb {R}^{d \times L}, \ \bar{\bar{B}}^{p_k} = \bar{A}^{p_k} B^{p_k} \in \mathbb {R}^{d \times J}.$$ (Eq. 17) This layer uses a stack of Transformer encoder blocks for question representations and obtains $M^q \in \mathbb {R}^{d \times J}$ from $G^{p \rightarrow q}$ . It also uses an another stack for passage representations and obtains $M^{p_k} \in \mathbb {R}^{d \times L}$ from $G^{q \rightarrow p_k}$ for each $k$ -th passage. The outputs of this layer, $M^q$ and $\lbrace M^{p_k}\rbrace $ , are passed on to the answer sentence decoder; the $\lbrace M^{p_k}\rbrace $ are also passed on to the passage ranker and answer possibility classifier. Passage Ranker The passage ranker maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the relevance score of each passage. To obtain a fixed-dimensional pooled representation of each passage sequence, this layer takes the output for the first passage word, $M^{p_k}_1$ , which corresponds to the beginning-of-sentence token. It calculates the relevance of each $k$ -th passage to the question as: $$\beta ^{p_k} = \mathrm {sigmoid}({w^r}^\top M^{p_k}_1),$$ (Eq. 20) where $w^r \in \mathbb {R}^{d}$ are learnable parameters. Answer Possibility Classifier The answer possibility classifier maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the probability of the answer possibility. The classifier takes the output for the first word, $M^{p_k}_1$ , for all passages and concatenates them to obtain a fixed-dimensional representation. It calculates the answer possibility to the question as: $$P(a) = \mathrm {sigmoid}({w^c}^\top [M^{p_1}_1; \ldots ; M^{p_K}_1]),$$ (Eq. 22) where $w^c \in \mathbb {R}^{Kd}$ are learnable parameters. Answer Sentence Decoder Given the outputs provided by the reader, the decoder generates a sequence of answer words one element at a time. It is auto-regressive BIBREF24 , consuming the previously generated words as additional input at each decoding step. Let $y = \lbrace y_1, \ldots , y_{T}\rbrace $ represent one-hot vectors of words in the answer. This layer has the same components as the word embedding layer of the question-passages reader, except that it uses a unidirectional ELMo in order to ensure that the predictions for position $t$ depend only on the known outputs at positions less than $t$ . Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. 
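The style conditioning just described amounts to prepending one artificial token to the decoder input; a trivial sketch follows (the token strings are our own placeholders, not the ones used in the model).

```python
def decoder_input_with_style(style, answer_tokens):
    """Prepend an artificial style token (y_1) so a single decoder can produce
    multiple answer styles; at test time the user chooses this first token."""
    assert style in ("<qa>", "<nlg>")   # placeholder token strings, not from the paper
    return [style] + list(answer_tokens)
```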
Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style. This layer uses a stack of Transformer decoder blocks on top of the embeddings provided by the word embedding layer. The input is immediately mapped to a $d$ -dimensional vector by a linear transformation, and the output of this layer is a sequence of $d$ -dimensional vectors: $\lbrace s_1, \ldots , s_T\rbrace $ . In addition to the encoder block, this block consists of second and third sub-layers after the self-attention block and before the feed-forward network, as shown in Figure 2 . As in BIBREF12 , the self-attention sub-layer uses a sub-sequent mask to prevent positions from attending to subsequent positions. The second and third sub-layers perform the multi-head attention over $M^q$ and $M^{p_\mathrm {all}}$ , respectively. The $M^{p_\mathrm {all}}$ is the concatenated outputs of the encoder stack for the passages, $$M^{p_\mathrm {all}} = [M^{p_1}, \ldots , M^{p_K}] \in \mathbb {R}^{d \times KL}.$$ (Eq. 27) The $[,]$ operator means vector concatenation across the columns. This attention for the concatenated passages enables our model to produce attention weights that are comparable between passages. Our extended mechanism allows both words to be generated from a fixed vocabulary and words to be copied from both the question and multiple passages. Figure 3 shows the overview. Let the extended vocabulary, $V_\mathrm {ext}$ , be the union of the common words (a small subset of the full vocabulary, $V$ , defined by the reader-side word embedding matrix) and all words appearing in the input question and passages. $P^v$ denotes the probability distribution of the $t$ -th answer word, $y_t$ , over the extended vocabulary. It is defined as: $$P^v(y_t) =\mathrm {softmax}({W^2}^\top (W^1 s_t + b^1)),$$ (Eq. 31) where the output embedding $W^2 \in \mathbb {R}^{d_\mathrm {word} \times V_\mathrm {ext}}$ is tied with the corresponding part of the input embedding BIBREF25 , and $W^1 \in \mathbb {R}^{d_\mathrm {word} \times d}$ and $b^1 \in \mathbb {R}^{d_\mathrm {word}}$ are learnable parameters. $P^v(y_t)$ is zero if $y_t$ is an out-of-vocabulary word for $V$ . The copy mechanism used in the original pointer-generator is based on the attention weights of a single-layer attentional RNN decoder BIBREF9 . The attention weights in our decoder stack are the intermediate outputs in multi-head attentions and are not suitable for the copy mechanism. Therefore, our model also uses additive attentions for the question and multiple passages on top of the decoder stack. The layer takes $s_t$ as the query and outputs $\alpha ^q_t \in \mathbb {R}^J$ ( $\alpha ^p_t \in \mathbb {R}^{KL}$ ) as the attention weights and $c^q_t \in \mathbb {R}^d$ ( $c^p_t \in \mathbb {R}^d$ ) as the context vectors for the question (passages): $$e^q_j &= {w^q}^\top \tanh (W^{qm} M_j^q + W^{qs} s_t +b^q), \\ \alpha ^q_t &= \mathrm {softmax}(e^q), \\ c^q_t &= \textstyle \sum _j \alpha ^q_{tj} M_j^q, \\ e^{p_k}_l &= {w^p}^\top \tanh (W^{pm} M_l^{p_k} + W^{ps} s_t +b^p), \\ \alpha ^p_t &= \mathrm {softmax}([e^{p_1}; \ldots ; e^{p_K}]), \\ c^p_t &= \textstyle \sum _{l} \alpha ^p_{tl} M^{p_\mathrm {all}}_{l},$$ (Eq. 33) where $w^q$ , $w^p \in \mathbb {R}^d$ , $W^{qm}$ , $W^{qs}$ , $W^{pm}$ , $W^{ps} \in \mathbb {R}^{d \times d}$ , and $b^q$ , $b^p \in \mathbb {R}^d$ are learnable parameters. 
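A minimal PyTorch sketch of the additive attention in Eq. 33, shown for a single memory (the question or the concatenated passages); the parameter names mirror the equations, but the module layout is our own.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Additive attention on top of the decoder stack (Eq. 33), simplified."""
    def __init__(self, d):
        super().__init__()
        self.W_mem = nn.Linear(d, d)                 # W^{qm} / W^{pm} plus bias b
        self.W_state = nn.Linear(d, d, bias=False)   # W^{qs} / W^{ps}
        self.v = nn.Linear(d, 1, bias=False)         # w^q / w^p

    def forward(self, memory, s_t):
        """memory: (T_mem, d) encoder outputs, s_t: (d,) decoder state."""
        scores = self.v(torch.tanh(self.W_mem(memory) + self.W_state(s_t))).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)        # attention weights, (T_mem,)
        context = alpha @ memory                     # context vector, (d,)
        return alpha, context
```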
$P^q$ and $P^p$ are the copy distributions over the extended vocabulary, defined as: $$P^q(y_t) &= \textstyle \sum _{j: x^q_j = y_t} \alpha ^q_{tj}, \\ P^p(y_t) &= \textstyle \sum _{l: x^{p_{k(l)}}_{l} = y_t} \alpha ^p_{tl},$$ (Eq. 34) where $k(l)$ means the passage index corresponding to the $l$ -th word in the concatenated passages. The final distribution of the $t$ -th answer word, $y_t$ , is defined as a mixture of the three distributions: $$P(y_t) = \lambda ^v P^v(y_t) + \lambda ^q P^q(y_t) + \lambda ^p P^p(y_t),$$ (Eq. 36) where the mixture weights are given by $$\lambda ^v, \lambda ^q, \lambda ^p = \mathrm {softmax}(W^m [s_t; c^q_t; c^p_t] + b^m).$$ (Eq. 37) $W^m \in \mathbb {R}^{3 \times 3d}$ , $b^m \in \mathbb {R}^3$ are learnable parameters. In order not to use words in irrelevant passages, our model introduces the concept of combined attention BIBREF26 . While the original technique combines the word and sentence level attentions, our model combines the passage-level relevance $\beta ^{p_k}$ and word-level attentions $\alpha ^p_t$ by using simple scalar multiplication and re-normalization. The updated word attention is: $$\alpha ^p_{tl} & := \frac{\alpha ^p_{tl} \beta ^{p_{k(l)} }}{\sum _{l^{\prime }} \alpha ^p_{tl^{\prime }} \beta ^{p_{k(l^{\prime })}}}.$$ (Eq. 39) Loss Function We define the training loss as the sum of losses in $$L(\theta ) = L_\mathrm {dec} + \gamma _\mathrm {rank} L_\mathrm {rank} + \gamma _\mathrm {cls} L_\mathrm {cls}$$ (Eq. 41) where $\theta $ is the set of all learnable parameters, and $\gamma _\mathrm {rank}$ and $\gamma _\mathrm {cls}$ are balancing parameters. The loss of the decoder, $L_\mathrm {dec}$ , is the negative log likelihood of the whole target answer sentence averaged over $N_\mathrm {able}$ answerable examples: $$L_\mathrm {dec} = - \frac{1}{N_\mathrm {able}}\sum _{(a,y)\in \mathcal {D}} \frac{a}{T} \sum _t \log P(y_{t}),$$ (Eq. 42) where $\mathcal {D}$ is the training dataset. The losses of the passage ranker, $L_\mathrm {rank}$ , and the answer possibility classifier, $L_\mathrm {cls}$ , are the binary cross entropy between the true and predicted values averaged over all $N$ examples: $$L_\mathrm {rank} = - \frac{1}{NK} \sum _k \sum _{r^{p_k}\in \mathcal {D}} \biggl ( \begin{split} &r^{p_k} \log \beta ^{p_k} + \\ &(1-r^{p_k}) \log (1-\beta ^{p_k}) \end{split} \biggr ),\\ L_\mathrm {cls} = - \frac{1}{N} \sum _{a \in \mathcal {D}} \biggl ( \begin{split} &a \log P(a) + \\ &(1-a) \log (1-P(a)) \end{split} \biggr ).$$ (Eq. 43) Setup We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\subset $ ANS $\subset $ ALL. We trained our model on a machine with eight NVIDIA P100 GPUs. 
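Stepping back from the setup for a moment, here is a simplified sketch of the joint loss in Eqs. 41-43; it assumes pre-computed per-token log-probabilities, treats all tensors as floats, and divides by the padded answer length for simplicity, so it is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def joint_loss(token_logprobs, a_true, beta_pred, r_true, a_pred,
               gamma_rank=0.5, gamma_cls=0.1):
    """L = L_dec + gamma_rank * L_rank + gamma_cls * L_cls (Eqs. 41-43).

    token_logprobs: (N, T) log P(y_t), zero-padded beyond the answer length
    a_true:         (N,)   gold answerability labels (float 0/1)
    beta_pred:      (N, K) predicted passage relevances (probabilities)
    r_true:         (N, K) gold passage relevances (float 0/1)
    a_pred:         (N,)   predicted answer possibilities (probabilities)
    """
    T = token_logprobs.size(1)
    n_able = a_true.sum().clamp(min=1.0)
    L_dec = -((a_true * token_logprobs.sum(dim=1) / T).sum()) / n_able
    L_rank = F.binary_cross_entropy(beta_pred, r_true)
    L_cls = F.binary_cross_entropy(a_pred, a_true)
    return L_dec + gamma_rank * L_rank + gamma_cls * L_cls
```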
Our model was jointly trained with the two answer styles in the ALL set for a total of eight epochs with a batch size of 80. The training took roughly six days. The ensemble model consists of six training runs with the identical architecture and hyperparameters. The hidden size $d$ was 304, and the number of attention heads was 8. The inner state size of the feed-forward networks was 256. The numbers of shared encoding blocks, modeling blocks for question, modeling blocks for passages, and decoder blocks were 3, 2, 5, and 8, respectively. We used the pre-trained uncased 300-dimensional GloVe BIBREF15 and the original 512-dimensional ELMo BIBREF16 . We used the spaCy tokenizer, and all words were lowercased except the input for ELMo. The number of common words in $V_\mathrm {ext}$ was 5,000. We used the Adam optimization BIBREF27 with $\beta _1 = 0.9$ , $\beta _2 = 0.999$ , and $\epsilon = 10^{-8}$ . Weights were initialized using $N(0, 0.02)$ , except that the biases of all the linear transformations were initialized with zero vectors. The learning rate was increased linearly from zero to $2.5 \times 10^{-4}$ in the first 2,000 steps and annealed to 0 using a cosine schedule. All parameter gradients were clipped to a maximum norm of 1. An exponential moving average was applied to all trainable variables with a decay rate 0.9995. The balancing factors of joint learning, $\lambda _\mathrm {rank}$ and $\lambda _\mathrm {cls}$ , were set to 0.5 and 0.1. We used a modified version of the L $_2$ regularization proposed in BIBREF28 , with $w = 0.01$ . We additionally used a dropout BIBREF29 rate of 0.3 for all highway networks and residual and scaled dot-product attention operations in the multi-head attention mechanism. We also used one-sided label smoothing BIBREF30 for the passage relevance and answer possibility labels. We smoothed only the positive labels to 0.9. Results Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in terms of Rouge-L. In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1. The capability of creating abstractive summaries from the question and passages contributed to its improvements over the state-of-the-art extractive approaches BIBREF6 , BIBREF7 . Table 3 shows the results of the ablation test for our model (controlled with the NLG style) on the well-formed answers of the WFA dev. set. Our model, which was trained with the ALL set consisting of the two styles, outperformed the model trained with the WFA set consisting of the single style. Multi-style learning allowed our model to improve NLG performance by also using non-sentence answers. Table 3 shows that our model outperformed the model that used RNNs and self-attentions instead of Transformer blocks as in MCAN BIBREF11 . Our deep Transformer decoder captured the interaction among the question, the passages, and the answer better than a single-layer LSTM decoder. Table 3 shows that our model (jointly trained with the passage ranker and answer possibility classifier) outperformed the model that did not use the ranker and classifier. The joint learning has a regularization effect on the question-passages reader. We also confirmed that the gold passage ranker, which can predict passage relevances perfectly, improves RC performance significantly. Passage re-ranking will be a key to developing a system that can outperform humans. 
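The exponential moving average applied to all trainable variables can be kept in a few lines; the dictionary-of-tensors layout is our simplification.

```python
def update_ema(ema, params, decay=0.9995):
    """Exponential moving average over all trainable variables."""
    for name, value in params.items():
        ema[name] = decay * ema[name] + (1.0 - decay) * value
    return ema
```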
Table 4 shows the passage re-ranking performance for the ten given passages on the ANS dev. set. Our ranker improved the initial ranking provided by Bing by a significant margin. Also, the ranker shares the question-passages reader with the answer decoder, and this sharing contributed to the improvements over the ranker trained without the answer decoder. This result is similar to those reported in BIBREF33 . Moreover, the joint learning with the answer possibility classifier and multiple answer styles, which enables our model to learn from a larger number of data, improved the re-ranking. Figure 4 shows the precision-recall curve of answer possibility classification on the ALL dev. set, where the positive class is the answerable data. Our model identified the answerable questions well. The maximum $F_1$ score was 0.7893. This is the first report on answer possibility classification with MS MARCO 2.1. Figure 5 shows the lengths of the answers generated by our model, which are broken down by answer style and query type. The generated answers were relatively shorter than the reference answers but well controlled with the target style in every query type. Also, we should note that our model does not guarantee the consistency in terms of meaning across the answer styles. We randomly selected 100 questions and compared the answers our model generated with the NLG and Q&A styles. The consistency ratio was 0.81, where major errors were due to copying words from different parts of the passages and generating different words, especially yes/no, from a fixed vocabulary. Appendix "Reading Comprehension Examples generated by Masque from MS MARCO 2.1" shows examples of generated answers. We found (d) style errors; (e) yes/no classification errors; (f) copy errors with respect to numerical values; and (c,e) grammatical errors that were originally contained in the inputs. Conclusion We believe our study makes two contributions to the study of multi-passage RC with NLG. Our model enables 1) multi-source abstractive summarization based RC and 2) style-controllable RC. The key strength of our model is its high accuracy of generating abstractive summaries from the question and passages; our model achieved state-of-the-art performance in terms of Rouge-L on the Q&A and NLG tasks of MS MARCO 2.1 that have different answer styles BIBREF5 . The styles considered in this paper are only related to the context of the question in the answer sentence; our model will be promising for controlling other styles such as length and speaking styles. Future work will involve exploring the potential of hybrid models combining extractive and abstractive approaches and improving the passage re-ranking and answerable question identification.
Bleu-1: 54.11, Bleu-4: 30.43, METEOR: 26.13, ROUGE-L: 59.87
cb8a6f5c29715619a137e21b54b29e9dd48dad7d
cb8a6f5c29715619a137e21b54b29e9dd48dad7d_0
Q: What is an "answer style"? Text: Introduction Question answering has been a long-standing research problem. Recently, reading comprehension (RC), a challenge to answer a question given textual evidence provided in a document set, has received much attention. Here, current mainstream studies have treated RC as a process of extracting an answer span from one passage BIBREF0 , BIBREF1 or multiple passages BIBREF2 , which is usually done by predicting the start and end positions of the answer BIBREF3 , BIBREF4 . The demand for answering questions in natural language is increasing rapidly, and this has led to the development of smart devices such as Siri and Alexa. However, in comparison with answer span extraction, the natural language generation (NLG) ability for RC has been less studied. While datasets such as MS MARCO BIBREF5 have been proposed for providing abstractive answers in natural language, the state-of-the-art methods BIBREF6 , BIBREF7 are based on answer span extraction, even for the datasets. Generative models such as S-Net BIBREF8 suffer from a dearth of training data to cover open-domain questions. Moreover, to satisfy various information needs, intelligent agents should be capable of answering one question in multiple styles, such as concise phrases that do not contain the context of the question and well-formed sentences that make sense even without the context of the question. These capabilities complement each other; however, the methods used in previous studies cannot utilize and control different answer styles within a model. In this study, we propose a generative model, called Masque, for multi-passage RC. On the MS MARCO 2.1 dataset, Masque achieves state-of-the-art performance on the dataset's two tasks, Q&A and NLG, with different answer styles. The main contributions of this study are that our model enables the following two abilities. Problem Formulation The task considered in this paper, is defined as: Problem 1 Given a question with $J$ words $x^q = \lbrace x^q_1, \ldots , x^q_J\rbrace $ , a set of $K$ passages, where each $k$ -th passage is composed of $L$ words $x^{p_k} = \lbrace x^{p_k}_1, \ldots , x^{p_k}_{L}\rbrace $ , and an answer style $s$ , an RC system outputs an answer $y = \lbrace y_1, \ldots , y_T \rbrace $ conditioned on the style. In short, for inference, given a set of 3-tuples $(x^q, \lbrace x^{p_k}\rbrace , s)$ , the system predicts $P(y)$ . The training data is a set of 6-tuples: $(x^q, \lbrace x^{p_k}\rbrace , s, y, a, \lbrace r^{p_k}\rbrace )$ , where $a$ is 1 if the question is answerable with the provided passages and 0 otherwise, and $r^{p_k}$ is 1 if the $k$ -th passage is required to formulate the answer and 0 otherwise. Proposed Model Our proposed model, Masque, is based on multi-source abstractive summarization; the answer our model generates can be viewed as a summary from the question and multiple passages. It is also style-controllable; one model can generate the answer with the target style. Masque directly models the conditional probability $p(y|x^q, \lbrace x^{p_k}\rbrace , s)$ . In addition to multi-style learning, it considers passage ranking and answer possibility classification together as multi-task learning in order to improve accuracy. Figure 2 shows the model architecture. It consists of the following modules. 1 The question-passages reader (§ "Question-Passages Reader" ) models interactions between the question and passages. 2 The passage ranker (§ "Passage Ranker" ) finds relevant passages to the question. 
3 The answer possibility classifier (§ "Answer Possibility Classifier" ) identifies answerable questions. 4 The answer sentence decoder (§ "Answer Sentence Decoder" ) outputs a sequence of words conditioned on the style. Question-Passages Reader Given a question and passages, the question-passages reader matches them so that the interactions among the question (passage) words conditioned on the passages (question) can be captured. Let $x^q$ and $x^{p_k}$ represent one-hot vectors of words in the question and $k$ -th passage. First, this layer projects each of the one-hot vectors (of size $V$ ) into a $d_\mathrm {word}$ -dimensional continuous vector space with a pre-trained weight matrix $W^e \in \mathbb {R}^{d_\mathrm {word} \times V}$ such as GloVe BIBREF15 . Next, it uses contextualized word representations, ELMo BIBREF16 , which is a character-level two-layer bidirectional language model pre-trained on a large-scale corpus. ELMo representations allow our model to use morphological clues to form robust representations for out-of-vocabulary words unseen in training. Then, the concatenation of the word and contextualized embedding vectors is passed to a two-layer highway network BIBREF17 that is shared for the question and passages. This layer uses a stack of Transformer blocks, which are shared for the question and passages, on top of the embeddings provided by the word embedding layer. The input of the first block is immediately mapped to a $d$ -dimensional vector by a linear transformation. The outputs of this layer are sequences of $d$ -dimensional vectors: $E^{p_k} \in \mathbb {R}^{d \times L}$ for the $k$ -th passage and $E^q \in \mathbb {R}^{d \times J}$ for the question. It consists of two sub-layers: a self-attention layer and a position-wise feed-forward network. For the self-attention layer, we adopt the multi-head attention mechanism defined in BIBREF12 . The feed-forward network consists of two linear transformations with a GELU BIBREF18 activation in between, following OpenAI GPT BIBREF19 . Each sub-layer is placed inside a residual block BIBREF20 . For an input $x$ and a given sub-layer function $f$ , the output is $\mathrm {LayerNorm}(f(x)+x)$ , where $\mathrm {LayerNorm}$ indicates the layer normalization proposed in BIBREF21 . To facilitate these residual connections, all sub-layers produce outputs of dimension $d$ . Note that our model does not use any position embeddings because ELMo gives the positional information of the words in each sequence. This layer fuses information from the passages to the question as well as from the question to the passages in a dual mechanism. It first computes a similarity matrix $U^{p_k} \in \mathbb {R}^{L{\times }J}$ between the question and $k$ -th passage, as is done in BIBREF22 , where $$U^{p_k}_{lj} = {w^a}^\top [ E^{p_k}_l; E^q_j; E^{p_k}_l \odot E^q_j ]$$ (Eq. 15) indicates the similarity between the $l$ -th word of the $k$ -th passage and the $j$ -th question word. $w^a \in \mathbb {R}^{3d}$ are learnable parameters. The $\odot $ operator denotes the Hadamard product, and the $[;]$ operator means vector concatenation across the rows. Next, it obtains the row and column normalized similarity matrices $A^{p_k} = \mathrm {softmax}_j({U^{p_k}}^\top ) \in \mathbb {R}^{J\times L}$ and $B^{p_k} = \mathrm {softmax}_{l}(U^{p_k}) \in \mathbb {R}^{L \times J}$ . 
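Looking back at the shared encoding block described earlier in this section, here is a simplified PyTorch sketch of its two sub-layers (multi-head self-attention and a GELU feed-forward network, each wrapped as LayerNorm(f(x)+x)); using nn.MultiheadAttention is our simplification of the block.

```python
import torch
import torch.nn as nn

class SharedEncoderBlock(nn.Module):
    """One Transformer encoder block: self-attention + position-wise FFN,
    each placed inside a residual block with layer normalization."""
    def __init__(self, d, n_heads, d_ff):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, n_heads)
        self.ffn = nn.Sequential(nn.Linear(d, d_ff), nn.GELU(), nn.Linear(d_ff, d))
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)

    def forward(self, x):                        # x: (seq_len, batch, d)
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(attn_out + x)             # LayerNorm(f(x) + x)
        x = self.norm2(self.ffn(x) + x)
        return x
```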
We use DCN BIBREF23 as the dual attention mechanism to obtain question-to-passage representations $G^{q \rightarrow p_k} \in \mathbb {R}^{5d \times L}$ : $$\nonumber [E^{p_k}; \bar{A}^{p_k}; \bar{\bar{A}}^{p_k}; E^{p_k} \odot \bar{A}^{p_k}; E^{p_k} \odot \bar{\bar{A}}^{p_k}]$$ (Eq. 16) and passage-to-question ones $G^{p \rightarrow q} \in \mathbb {R}^{5d \times J}$ : $$\begin{split} \nonumber & [ E^{q} ; \max _k(\bar{B}^{p_k}); \max _k(\bar{\bar{B}}^{p_k}); \\ &\hspace{10.0pt} E^{q} \odot \max _k(\bar{B}^{p_k}); E^{q} \odot \max _k(\bar{\bar{B}}^{p_k}) ] \mathrm {\ \ where} \end{split}\\ \nonumber &\bar{A}^{p_k} = E^q A^{p_k}\in \mathbb {R}^{d \times L}, \ \bar{B}^{p_k} = E^{p_k} B^{p_k} \in \mathbb {R}^{d \times J} \\ \nonumber &\bar{\bar{A}}^{p_k} = \bar{B}^{p_k} A^{p_k} \in \mathbb {R}^{d \times L}, \ \bar{\bar{B}}^{p_k} = \bar{A}^{p_k} B^{p_k} \in \mathbb {R}^{d \times J}.$$ (Eq. 17) This layer uses a stack of Transformer encoder blocks for question representations and obtains $M^q \in \mathbb {R}^{d \times J}$ from $G^{p \rightarrow q}$ . It also uses an another stack for passage representations and obtains $M^{p_k} \in \mathbb {R}^{d \times L}$ from $G^{q \rightarrow p_k}$ for each $k$ -th passage. The outputs of this layer, $M^q$ and $\lbrace M^{p_k}\rbrace $ , are passed on to the answer sentence decoder; the $\lbrace M^{p_k}\rbrace $ are also passed on to the passage ranker and answer possibility classifier. Passage Ranker The passage ranker maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the relevance score of each passage. To obtain a fixed-dimensional pooled representation of each passage sequence, this layer takes the output for the first passage word, $M^{p_k}_1$ , which corresponds to the beginning-of-sentence token. It calculates the relevance of each $k$ -th passage to the question as: $$\beta ^{p_k} = \mathrm {sigmoid}({w^r}^\top M^{p_k}_1),$$ (Eq. 20) where $w^r \in \mathbb {R}^{d}$ are learnable parameters. Answer Possibility Classifier The answer possibility classifier maps the output of the modeling layer, $\lbrace M^{p_k}\rbrace $ , to the probability of the answer possibility. The classifier takes the output for the first word, $M^{p_k}_1$ , for all passages and concatenates them to obtain a fixed-dimensional representation. It calculates the answer possibility to the question as: $$P(a) = \mathrm {sigmoid}({w^c}^\top [M^{p_1}_1; \ldots ; M^{p_K}_1]),$$ (Eq. 22) where $w^c \in \mathbb {R}^{Kd}$ are learnable parameters. Answer Sentence Decoder Given the outputs provided by the reader, the decoder generates a sequence of answer words one element at a time. It is auto-regressive BIBREF24 , consuming the previously generated words as additional input at each decoding step. Let $y = \lbrace y_1, \ldots , y_{T}\rbrace $ represent one-hot vectors of words in the answer. This layer has the same components as the word embedding layer of the question-passages reader, except that it uses a unidirectional ELMo in order to ensure that the predictions for position $t$ depend only on the known outputs at positions less than $t$ . Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. 
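Picking up the passage ranker and answer possibility classifier described above, a minimal sketch of the two heads in Eqs. 20 and 22; the use of nn.Linear (which adds a bias) is a small deviation from the plain dot products in the equations.

```python
import torch
import torch.nn as nn

class RankerAndClassifier(nn.Module):
    """Passage relevance beta^{p_k} (Eq. 20) and answer possibility P(a) (Eq. 22)."""
    def __init__(self, d, num_passages):
        super().__init__()
        self.w_r = nn.Linear(d, 1)                   # relevance head
        self.w_c = nn.Linear(num_passages * d, 1)    # answerability head

    def forward(self, M_p):
        """M_p: (K, L, d) modeling-layer outputs for K passages."""
        pooled = M_p[:, 0, :]                                    # first-token pooling, (K, d)
        beta = torch.sigmoid(self.w_r(pooled)).squeeze(-1)       # (K,)
        p_answerable = torch.sigmoid(self.w_c(pooled.reshape(1, -1))).squeeze()
        return beta, p_answerable
```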
Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style. This layer uses a stack of Transformer decoder blocks on top of the embeddings provided by the word embedding layer. The input is immediately mapped to a $d$ -dimensional vector by a linear transformation, and the output of this layer is a sequence of $d$ -dimensional vectors: $\lbrace s_1, \ldots , s_T\rbrace $ . In addition to the encoder block, this block consists of second and third sub-layers after the self-attention block and before the feed-forward network, as shown in Figure 2 . As in BIBREF12 , the self-attention sub-layer uses a sub-sequent mask to prevent positions from attending to subsequent positions. The second and third sub-layers perform the multi-head attention over $M^q$ and $M^{p_\mathrm {all}}$ , respectively. The $M^{p_\mathrm {all}}$ is the concatenated outputs of the encoder stack for the passages, $$M^{p_\mathrm {all}} = [M^{p_1}, \ldots , M^{p_K}] \in \mathbb {R}^{d \times KL}.$$ (Eq. 27) The $[,]$ operator means vector concatenation across the columns. This attention for the concatenated passages enables our model to produce attention weights that are comparable between passages. Our extended mechanism allows both words to be generated from a fixed vocabulary and words to be copied from both the question and multiple passages. Figure 3 shows the overview. Let the extended vocabulary, $V_\mathrm {ext}$ , be the union of the common words (a small subset of the full vocabulary, $V$ , defined by the reader-side word embedding matrix) and all words appearing in the input question and passages. $P^v$ denotes the probability distribution of the $t$ -th answer word, $y_t$ , over the extended vocabulary. It is defined as: $$P^v(y_t) =\mathrm {softmax}({W^2}^\top (W^1 s_t + b^1)),$$ (Eq. 31) where the output embedding $W^2 \in \mathbb {R}^{d_\mathrm {word} \times V_\mathrm {ext}}$ is tied with the corresponding part of the input embedding BIBREF25 , and $W^1 \in \mathbb {R}^{d_\mathrm {word} \times d}$ and $b^1 \in \mathbb {R}^{d_\mathrm {word}}$ are learnable parameters. $P^v(y_t)$ is zero if $y_t$ is an out-of-vocabulary word for $V$ . The copy mechanism used in the original pointer-generator is based on the attention weights of a single-layer attentional RNN decoder BIBREF9 . The attention weights in our decoder stack are the intermediate outputs in multi-head attentions and are not suitable for the copy mechanism. Therefore, our model also uses additive attentions for the question and multiple passages on top of the decoder stack. The layer takes $s_t$ as the query and outputs $\alpha ^q_t \in \mathbb {R}^J$ ( $\alpha ^p_t \in \mathbb {R}^{KL}$ ) as the attention weights and $c^q_t \in \mathbb {R}^d$ ( $c^p_t \in \mathbb {R}^d$ ) as the context vectors for the question (passages): $$e^q_j &= {w^q}^\top \tanh (W^{qm} M_j^q + W^{qs} s_t +b^q), \\ \alpha ^q_t &= \mathrm {softmax}(e^q), \\ c^q_t &= \textstyle \sum _j \alpha ^q_{tj} M_j^q, \\ e^{p_k}_l &= {w^p}^\top \tanh (W^{pm} M_l^{p_k} + W^{ps} s_t +b^p), \\ \alpha ^p_t &= \mathrm {softmax}([e^{p_1}; \ldots ; e^{p_K}]), \\ c^p_t &= \textstyle \sum _{l} \alpha ^p_{tl} M^{p_\mathrm {all}}_{l},$$ (Eq. 33) where $w^q$ , $w^p \in \mathbb {R}^d$ , $W^{qm}$ , $W^{qs}$ , $W^{pm}$ , $W^{ps} \in \mathbb {R}^{d \times d}$ , and $b^q$ , $b^p \in \mathbb {R}^d$ are learnable parameters. 
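A small sketch of the generator distribution P^v in Eq. 31; in the full model the output embedding W^2 is tied to the reader-side input embedding, which this sketch only notes in a comment.

```python
import torch
import torch.nn as nn

class GeneratorHead(nn.Module):
    """P^v(y_t) = softmax(W2^T (W1 s_t + b1)) over the extended vocabulary."""
    def __init__(self, d, d_word, v_ext):
        super().__init__()
        self.proj = nn.Linear(d, d_word)                 # W1, b1
        self.out = nn.Linear(d_word, v_ext, bias=False)  # W2 (tied with the input embedding in practice)

    def forward(self, s_t):
        """s_t: (batch, d) decoder output; returns (batch, v_ext)."""
        return torch.softmax(self.out(self.proj(s_t)), dim=-1)
```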
$P^q$ and $P^p$ are the copy distributions over the extended vocabulary, defined as: $$P^q(y_t) &= \textstyle \sum _{j: x^q_j = y_t} \alpha ^q_{tj}, \\ P^p(y_t) &= \textstyle \sum _{l: x^{p_{k(l)}}_{l} = y_t} \alpha ^p_{tl},$$ (Eq. 34) where $k(l)$ means the passage index corresponding to the $l$ -th word in the concatenated passages. The final distribution of the $t$ -th answer word, $y_t$ , is defined as a mixture of the three distributions: $$P(y_t) = \lambda ^v P^v(y_t) + \lambda ^q P^q(y_t) + \lambda ^p P^p(y_t),$$ (Eq. 36) where the mixture weights are given by $$\lambda ^v, \lambda ^q, \lambda ^p = \mathrm {softmax}(W^m [s_t; c^q_t; c^p_t] + b^m).$$ (Eq. 37) $W^m \in \mathbb {R}^{3 \times 3d}$ , $b^m \in \mathbb {R}^3$ are learnable parameters. In order not to use words in irrelevant passages, our model introduces the concept of combined attention BIBREF26 . While the original technique combines the word and sentence level attentions, our model combines the passage-level relevance $\beta ^{p_k}$ and word-level attentions $\alpha ^p_t$ by using simple scalar multiplication and re-normalization. The updated word attention is: $$\alpha ^p_{tl} & := \frac{\alpha ^p_{tl} \beta ^{p_{k(l)} }}{\sum _{l^{\prime }} \alpha ^p_{tl^{\prime }} \beta ^{p_{k(l^{\prime })}}}.$$ (Eq. 39) Loss Function We define the training loss as the sum of losses in $$L(\theta ) = L_\mathrm {dec} + \gamma _\mathrm {rank} L_\mathrm {rank} + \gamma _\mathrm {cls} L_\mathrm {cls}$$ (Eq. 41) where $\theta $ is the set of all learnable parameters, and $\gamma _\mathrm {rank}$ and $\gamma _\mathrm {cls}$ are balancing parameters. The loss of the decoder, $L_\mathrm {dec}$ , is the negative log likelihood of the whole target answer sentence averaged over $N_\mathrm {able}$ answerable examples: $$L_\mathrm {dec} = - \frac{1}{N_\mathrm {able}}\sum _{(a,y)\in \mathcal {D}} \frac{a}{T} \sum _t \log P(y_{t}),$$ (Eq. 42) where $\mathcal {D}$ is the training dataset. The losses of the passage ranker, $L_\mathrm {rank}$ , and the answer possibility classifier, $L_\mathrm {cls}$ , are the binary cross entropy between the true and predicted values averaged over all $N$ examples: $$L_\mathrm {rank} = - \frac{1}{NK} \sum _k \sum _{r^{p_k}\in \mathcal {D}} \biggl ( \begin{split} &r^{p_k} \log \beta ^{p_k} + \\ &(1-r^{p_k}) \log (1-\beta ^{p_k}) \end{split} \biggr ),\\ L_\mathrm {cls} = - \frac{1}{N} \sum _{a \in \mathcal {D}} \biggl ( \begin{split} &a \log P(a) + \\ &(1-a) \log (1-P(a)) \end{split} \biggr ).$$ (Eq. 43) Setup We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\subset $ ANS $\subset $ ALL. We trained our model on a machine with eight NVIDIA P100 GPUs. 
Our model was jointly trained with the two answer styles in the ALL set for a total of eight epochs with a batch size of 80. The training took roughly six days. The ensemble model consists of six training runs with the identical architecture and hyperparameters. The hidden size $d$ was 304, and the number of attention heads was 8. The inner state size of the feed-forward networks was 256. The numbers of shared encoding blocks, modeling blocks for question, modeling blocks for passages, and decoder blocks were 3, 2, 5, and 8, respectively. We used the pre-trained uncased 300-dimensional GloVe BIBREF15 and the original 512-dimensional ELMo BIBREF16 . We used the spaCy tokenizer, and all words were lowercased except the input for ELMo. The number of common words in $V_\mathrm {ext}$ was 5,000. We used the Adam optimization BIBREF27 with $\beta _1 = 0.9$ , $\beta _2 = 0.999$ , and $\epsilon = 10^{-8}$ . Weights were initialized using $N(0, 0.02)$ , except that the biases of all the linear transformations were initialized with zero vectors. The learning rate was increased linearly from zero to $2.5 \times 10^{-4}$ in the first 2,000 steps and annealed to 0 using a cosine schedule. All parameter gradients were clipped to a maximum norm of 1. An exponential moving average was applied to all trainable variables with a decay rate 0.9995. The balancing factors of joint learning, $\lambda _\mathrm {rank}$ and $\lambda _\mathrm {cls}$ , were set to 0.5 and 0.1. We used a modified version of the L $_2$ regularization proposed in BIBREF28 , with $w = 0.01$ . We additionally used a dropout BIBREF29 rate of 0.3 for all highway networks and residual and scaled dot-product attention operations in the multi-head attention mechanism. We also used one-sided label smoothing BIBREF30 for the passage relevance and answer possibility labels. We smoothed only the positive labels to 0.9. Results Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in terms of Rouge-L. In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1. The capability of creating abstractive summaries from the question and passages contributed to its improvements over the state-of-the-art extractive approaches BIBREF6 , BIBREF7 . Table 3 shows the results of the ablation test for our model (controlled with the NLG style) on the well-formed answers of the WFA dev. set. Our model, which was trained with the ALL set consisting of the two styles, outperformed the model trained with the WFA set consisting of the single style. Multi-style learning allowed our model to improve NLG performance by also using non-sentence answers. Table 3 shows that our model outperformed the model that used RNNs and self-attentions instead of Transformer blocks as in MCAN BIBREF11 . Our deep Transformer decoder captured the interaction among the question, the passages, and the answer better than a single-layer LSTM decoder. Table 3 shows that our model (jointly trained with the passage ranker and answer possibility classifier) outperformed the model that did not use the ranker and classifier. The joint learning has a regularization effect on the question-passages reader. We also confirmed that the gold passage ranker, which can predict passage relevances perfectly, improves RC performance significantly. Passage re-ranking will be a key to developing a system that can outperform humans. 
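The one-sided label smoothing applied to the passage relevance and answer possibility labels amounts to the following small sketch.

```python
def smooth_positive_labels(labels, positive_value=0.9):
    """One-sided label smoothing: positive labels become 0.9, negatives stay 0."""
    return [positive_value if y == 1 else float(y) for y in labels]
```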
Table 4 shows the passage re-ranking performance for the ten given passages on the ANS dev. set. Our ranker improved the initial ranking provided by Bing by a significant margin. Also, the ranker shares the question-passages reader with the answer decoder, and this sharing contributed to the improvements over the ranker trained without the answer decoder. This result is similar to those reported in BIBREF33 . Moreover, the joint learning with the answer possibility classifier and multiple answer styles, which enables our model to learn from a larger number of data, improved the re-ranking. Figure 4 shows the precision-recall curve of answer possibility classification on the ALL dev. set, where the positive class is the answerable data. Our model identified the answerable questions well. The maximum $F_1$ score was 0.7893. This is the first report on answer possibility classification with MS MARCO 2.1. Figure 5 shows the lengths of the answers generated by our model, which are broken down by answer style and query type. The generated answers were relatively shorter than the reference answers but well controlled with the target style in every query type. Also, we should note that our model does not guarantee the consistency in terms of meaning across the answer styles. We randomly selected 100 questions and compared the answers our model generated with the NLG and Q&A styles. The consistency ratio was 0.81, where major errors were due to copying words from different parts of the passages and generating different words, especially yes/no, from a fixed vocabulary. Appendix "Reading Comprehension Examples generated by Masque from MS MARCO 2.1" shows examples of generated answers. We found (d) style errors; (e) yes/no classification errors; (f) copy errors with respect to numerical values; and (c,e) grammatical errors that were originally contained in the inputs. Conclusion We believe our study makes two contributions to the study of multi-passage RC with NLG. Our model enables 1) multi-source abstractive summarization based RC and 2) style-controllable RC. The key strength of our model is its high accuracy of generating abstractive summaries from the question and passages; our model achieved state-of-the-art performance in terms of Rouge-L on the Q&A and NLG tasks of MS MARCO 2.1 that have different answer styles BIBREF5 . The styles considered in this paper are only related to the context of the question in the answer sentence; our model will be promising for controlling other styles such as length and speaking styles. Future work will involve exploring the potential of hybrid models combining extractive and abstractive approaches and improving the passage re-ranking and answerable question identification.
well-formed sentences (NLG task) vs. concise answers (Q&A task)
8a7bd9579d2783bfa81e055a7a6ebc3935da9d20
8a7bd9579d2783bfa81e055a7a6ebc3935da9d20_0
Q: What was the previous state of the art model for this task? Text: Introduction Lip reading, also known as visual speech recognition, aims to predict the sentence being spoken, given a silent video of a talking face. In noisy environments, where speech recognition is difficult, visual speech recognition offers an alternative way to understand speech. Besides, lip reading has practical potential in improved hearing aids, security, and silent dictation in public spaces. Lip reading is essentially a difficult problem, as most lip reading actuations, besides the lips and sometimes tongue and teeth, are latent and ambiguous. Several seemingly identical lip movements can produce different words. Thanks to the recent development of deep learning, English-based lip reading methods have made great progress, at both word-level BIBREF0 , BIBREF1 and sentence-level BIBREF2 , BIBREF3 . However, as the language of the most number of speakers, there is only a little work for Chinese Mandarin lip reading in the multimedia community. Yang et al. BIBREF4 present a naturally-distributed large-scale benchmark for Chinese Mandarin lip-reading in the wild, named LRW-1000, which contains 1,000 classes with 718,018 samples from more than 2,000 individual speakers. Each class corresponds to the syllables of a Mandarin word composed of one or several Chinese characters. However, they perform only word classification for Chinese Mandarin lip reading but not at the complete sentence level. LipCH-Net BIBREF5 is the first paper aiming for sentence-level Chinese Mandarin lip reading. LipCH-Net is a two-step end-to-end architecture, in which two deep neural network models are employed to perform the recognition of Picture-to-Pinyin (mouth motion pictures to pronunciations) and the recognition of Pinyin-to-Hanzi (pronunciations to texts) respectively. Then a joint optimization is performed to improve the overall performance. Belong to two different language families, English and Chinese Mandarin have many differences. The most significant one might be that: Chinese Mandarin is a tone language, while English is not. The tone is the use of pitch in language to distinguish lexical or grammatical meaning - that is, to distinguish or to inflect words . Even two words look the same on the face when pronounced, they can have different tones, thus have different meanings. For example, even though "UTF8gbsn练习" (which means practice) and "UTF8gbsn联系" (which means contact) have different meanings, but they have the same mouth movement. This increases ambiguity when lip reading. So the tone is an important factor for Chinese Mandarin lip reading. Based on the above considerations, in this paper, we present CSSMCM, a sentence-level Chinese Mandarin lip reading network, which contains three sub-networks. Same as BIBREF5 , in the first sub-network, pinyin sequence is predicted from the video. Different from BIBREF5 , which predicts pinyin characters from video, pinyin is taken as a whole in CSSMCM, also known as syllables. As we know, Mandarin Chinese is a syllable-based language and syllables are their logical unit of pronunciation. Compared with pinyin characters, syllables are a longer linguistic unit, and can reduce the difficulty of syllable choices in the decoder by sequence-to-sequence attention-based models BIBREF6 . Chen et al. BIBREF7 find that there might be a relationship between the production of lexical tones and the visible movements of the neck, head, and mouth. 
Motivated by this observation, in the second sub-network, both video and pinyin sequence is used as input to predict tone. Then in the third sub-network, video, pinyin, and tone sequence work together to predict the Chinese character sequence. At last, three sub-networks are jointly finetuned to improve overall performance. As there is no public sentence-level Chinese Mandarin lip reading dataset, we collect a new Chinese Mandarin Lip Reading dataset called CMLR based on China Network Television broadcasts containing talking faces together with subtitles of what is said. In summary, our major contributions are as follows. The Proposed Method In this section, we present CSSMCM, a lip reading model for Chinese Mandarin. As mention in Section SECREF1 , pinyin and tone are both important for Chinese Mandarin lip reading. Pinyin represents how to pronounce a Chinese character and is related to mouth movement. Tone can alleviate the ambiguity of visemes (several speech sounds that look the same) to some extent and can be inferred from visible movements. Based on this, the lip reading task is defined as follow: DISPLAYFORM0 The meaning of these symbols is given in Table TABREF5 . As shown in Equation ( EQREF6 ), the whole problem is divided into three parts, which corresponds to pinyin prediction, tone prediction, and character prediction separately. Each part will be described in detail below. Pinyin Prediction Sub-network The pinyin prediction sub-network transforms video sequence into pinyin sequence, which corresponds to INLINEFORM0 in Equation ( EQREF6 ). This sub-network is based on the sequence-to-sequence architecture with attention mechanism BIBREF8 . We name the encoder and decoder the video encoder and pinyin decoder, for the encoder process video sequence, and the decoder predicts pinyin sequence. The input video sequence is first fed into the VGG model BIBREF9 to extract visual feature. The output of conv5 of VGG is appended with global average pooling BIBREF10 to get the 512-dim feature vector. Then the 512-dim feature vector is fed into video encoder. The video encoder can be denoted as: DISPLAYFORM0 When predicting pinyin sequence, at each timestep INLINEFORM0 , video encoder outputs are attended to calculate a context vector INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM1 Tone Prediction Sub-network As shown in Equation ( EQREF6 ), tone prediction sub-network ( INLINEFORM0 ) takes video and pinyin sequence as inputs and predict corresponding tone sequence. This problem is modeled as a sequence-to-sequence learning problem too. The corresponding model architecture is shown in Figure FIGREF8 . In order to take both video and pinyin information into consideration when producing tone, a dual attention mechanism BIBREF3 is employed. Two independent attention mechanisms are used for video and pinyin sequence. Video context vectors INLINEFORM0 and pinyin context vectors INLINEFORM1 are fused when predicting a tone character at each decoder step. The video encoder is the same as in Section SECREF7 and the pinyin encoder is: DISPLAYFORM0 The tone decoder takes both video encoder outputs and pinyin encoder outputs to calculate context vector, and then predicts tones: DISPLAYFORM0 DISPLAYFORM1 Character Prediction Sub-network The character prediction sub-network corresponds to INLINEFORM0 in Equation ( EQREF6 ). It considers all the pinyin sequence, tone sequence and video sequence when predicting Chinese character. 
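Before detailing the character decoder, here is a minimal PyTorch sketch of the dual attention used by the tone decoder above; dot-product scoring and concatenation-based fusion are our simplifications, as the text does not pin these choices down.

```python
import torch

def dual_attention_fusion(s_t, video_mem, pinyin_mem):
    """Attend independently over the video and pinyin encoder outputs, then fuse
    the two context vectors for the current tone-decoder step.

    s_t: (d,) decoder state; video_mem: (T_v, d); pinyin_mem: (T_p, d)
    """
    a_video = torch.softmax(video_mem @ s_t, dim=0)    # (T_v,)
    a_pinyin = torch.softmax(pinyin_mem @ s_t, dim=0)  # (T_p,)
    c_video = a_video @ video_mem                      # (d,)
    c_pinyin = a_pinyin @ pinyin_mem                   # (d,)
    return torch.cat([c_video, c_pinyin], dim=-1)      # fused context, (2d,)
```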
Similarly, we also use attention based sequence-to-sequence architecture to model this equation. Here the attention mechanism is modified into triplet attention mechanism: DISPLAYFORM0 DISPLAYFORM1 For the following needs, the formula of tone encoder is also listed as follows: DISPLAYFORM0 CSSMCM Architecture The architecture of the proposed approach is demonstrated in Figure FIGREF32 . For better display, the three attention mechanisms are not shown in the figure. During the training of CSSMCM, the outputs of pinyin decoder are fed into pinyin encoder, the outputs of tone decoder into tone encoder: DISPLAYFORM0 DISPLAYFORM1 We replace Equation ( EQREF14 ) with Equation ( EQREF28 ), Equation ( EQREF26 ) with Equation ( EQREF29 ). Then, the three sub-networks are jointly trained and the overall loss function is defined as follows: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 stand for loss of pinyin prediction sub-network, tone prediction sub-network and character prediction sub-network respectively, as defined below. DISPLAYFORM0 Training Strategy To accelerate training and reduce overfitting, curriculum learning BIBREF3 is employed. The sentences are grouped into subsets according to the length of less than 11, 12-17, 18-23, more than 24 Chinese characters. Scheduled sampling proposed by BIBREF11 is used to eliminate the discrepancy between training and inference. At the training stage, the sampling rate from the previous output is selected from 0.7 to 1. Greedy decoder is used for fast decoding. Dataset In this section, a three-stage pipeline for generating the Chinese Mandarin Lip Reading (CMLR) dataset is described, which includes video pre-processing, text acquisition, and data generation. This three-stage pipeline is similar to the method mentioned in BIBREF3 , but considering the characteristics of our Chinese Mandarin dataset, we have optimized some steps and parts to generate a better quality lip reading dataset. The three-stage pipeline is detailed below. Video Pre-processing. First, national news program "News Broadcast" recorded between June 2009 and June 2018 is obtained from China Network Television website. Then, the HOG-based face detection method is performed BIBREF12 , followed by an open source platform for face recognition and alignment. The video clip set of eleven different hosts who broadcast the news is captured. During the face detection step, using frame skipping can improve efficiency while ensuring the program quality. Text Acquisition. Since there is no subtitle or text annotation in the original "News Broadcast" program, FFmpeg tools are used to extract the corresponding audio track from the video clip set. Then through the iFLYTEK ASR, the corresponding text annotation of the video clip set is obtained. However, there is some noise in these text annotation. English letters, Arabic numerals, and rare punctuation are deleted to get a more pure Chinese Mandarin lip reading dataset. Data Generation. The text annotation acquired in the previous step also contains timestamp information. Therefore, video clip set is intercepted according to these timestamp information, and then the corresponding word, phrase, or sentence video segment of the text annotation are obtained. Since the text timestamp information may have a few uncertain errors, some adjustments are made to the start frame and the end frame when intercepting the video segment. It is worth noting that through experiments, we found that using OpenCV can capture clearer video segment than the FFmpeg tools. 
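As a hedged illustration of the data-generation step just described, the sketch below cuts one subtitle-aligned segment with OpenCV from start and end timestamps; the function name, frame padding, and codec choice are our assumptions rather than the authors' pipeline.

```python
# Rough sketch of extracting one video segment with OpenCV, given noisy ASR
# timestamps in seconds (illustration only; paths and padding are assumptions).
import cv2

def extract_segment(video_path, start_sec, end_sec, out_path, pad_frames=2):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # adjust start/end frames slightly, since the text timestamps may be off
    start_frame = max(0, int(start_sec * fps) - pad_frames)
    end_frame = int(end_sec * fps) + pad_frames

    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
    for _ in range(start_frame, end_frame + 1):
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)
    cap.release()
    writer.release()
```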
Through the three-stage pipeline mentioned above, we can obtain the Chinese Mandarin Lip Reading (CMLR) dataset containing more than 100,000 sentences, 25,000 phrases, 3,500 characters. The dataset is randomly divided into training set, validation set, and test set in a ratio of 7:1:2. Details are listed in Table TABREF37 . Implementation Details The input images are 64 INLINEFORM0 128 in dimension. Lip frames are transformed into gray-scale, and the VGG network takes every 5 lip frames as an input, moving 2 frames at each timestep. For all sub-networks, a two-layer bi-direction GRU BIBREF13 with a cell size of 256 is used for the encoder and a two-layer uni-direction GRU with a cell size of 512 for the decoder. For character and pinyin vocabulary, we keep characters and pinyin that appear more than 20 times. [sos], [eos] and [pad] are also included in these three vocabularies. The final vocabulary size is 371 for pinyin prediction sub-network, 8 for tone prediction sub-network (four tones plus a neutral tone), and 1,779 for character prediction sub-network. The initial learning rate was 0.0001 and decreased by 50% every time the training error did not improve for 4 epochs. CSSMCM is implemented using pytorch library and trained on a Quadro 64C P5000 with 16GB memory. The total end-to-end model was trained for around 12 days. Compared Methods and Evaluation Protocol WAS: The architecture used in BIBREF3 without the audio input. The decoder output Chinese character at each timestep. Others keep unchanged to the original implementation. LipCH-Net-seq: For a fair comparison, we use sequence-to-sequence with attention framework to replace the Connectionist temporal classification (CTC) loss BIBREF14 used in LipCH-Net BIBREF5 when converting picture to pinyin. CSSMCM-w/o video: To evaluate the necessity of video information when predicting tone, the video stream is removed when predicting tone and Chinese characters. In other word, video is only used when predicting the pinyin sequence. The tone is predicted from the pinyin sequence. Tone information and pinyin information work together to predict Chinese character. We tried to implement the Lipnet architecture BIBREF2 to predict Chinese character at each timestep. However, the model did not converge. The possible reasons are due to the way CTC loss works and the difference between English and Chinese Mandarin. Compared to English, which only contains 26 characters, Chinese Mandarin contains thousands of Chinese characters. When CTC calculates loss, it first adds blank between every character in a sentence, that causes the number of the blank label is far more than any other Chinese character. Thus, when Lipnet starts training, it predicts only the blank label. After a certain epoch, "UTF8gbsn的" character will occasionally appear until the learning rate decays to close to zero. For all experiments, Character Error Rate (CER) and Pinyin Error Rate (PER) are used as evaluation metrics. CER is defined as INLINEFORM0 , where INLINEFORM1 is the number of substitutions, INLINEFORM2 is the number of deletions, INLINEFORM3 is the number of insertions to get from the reference to the hypothesis and INLINEFORM4 is the number of words in the reference. PER is calculated in the same way as CER. Tone Error Rate (TER) is also included when analyzing CSSMCM, which is calculated in the same way as above. Results Table TABREF40 shows a detailed comparison between various sub-network of different methods. 
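For reference, the CER defined above (PER and TER are computed the same way over pinyin or tone tokens) reduces to a normalized Levenshtein distance; a standard implementation, not taken from the paper's code, is:

```python
# Character Error Rate as defined above: (S + D + I) / N, computed via the
# Levenshtein distance over character sequences.
def cer(reference, hypothesis):
    ref, hyp = list(reference), list(hypothesis)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(1, len(ref))
```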
Comparing P2T and VP2T, VP2T considers video information when predicting the pinyin sequence and achieves a lower error rate. This verifies the conjecture of BIBREF7 that the generation of tones is related to the motion of the head. In terms of overall performance, CSSMCM exceeds all the other architecture on the CMLR dataset and achieves 32.48% character error rate. It is worth noting that CSSMCM-w/o video achieves the worst result (42.23% CER) even though its sub-networks perform well when trained separately. This may be due to the lack of visual information to support, and the accumulation of errors. CSSMCM using tone information performs better compared to LipCH-Net-seq, which does not use tone information. The comparison results show that tone is important when lip reading, and when predicting tone, visual information should be considered. Table TABREF41 shows some generated sentences from different methods. CSSMCM-w/o video architecture is not included due to its relatively lower performance. These are sentences other methods fail to predict but CSSMCM succeeds. The phrase "UTF8gbsn实惠" (which means affordable) in the first example sentence, has a tone of 2, 4 and its corresponding pinyin are shi, hui. WAS predicts it as "UTF8gbsn事会" (which means opportunity). Although the pinyin prediction is correct, the tone is wrong. LipCH-Net-seq predicts "UTF8gbsn实惠" as "UTF8gbsn吃贵" (not a word), which have the same finals "ui" and the corresponding mouth shapes are the same. It's the same in the second example. "UTF8gbsn前, 天, 年" have the same finals and mouth shapes, but the tone is different. These show that when predicting characters with the same lip shape but different tones, other methods are often unable to predict correctly. However, CSSMCM can leverage the tone information to predict successfully. Apart from the above results, Table TABREF42 also lists some failure cases of CSSMCM. The characters that CSSMCM predicts wrong are usually homophones or characters with the same final as the ground truth. In the first example, "UTF8gbsn价" and "UTF8gbsn下" have the same final, ia, while "UTF8gbsn一" and "UTF8gbsn医" are homophones in the second example. Unlike English, if one character in an English word is predicted wrong, the understanding of the transcriptions has little effect. However, if there is a character predicted wrong in Chinese words, it will greatly affect the understandability of transcriptions. In the second example, CSSMCM mispredicts "UTF8gbsn医学" ( which means medical) to "UTF8gbsn一水" (which means all). Although their first characters are pronounced the same, the meaning of the sentence changed from Now with the progress of medical science and technology in our country to It is now with the footsteps of China's Yishui Technology. Attention Visualisation Figure FIGREF44 (a) and Figure FIGREF44 (b) visualise the alignment of video frames and Chinese characters predicted by CSSMCM and WAS respectively. The ground truth sequence is "UTF8gbsn同时他还向媒体表示". Comparing Figure FIGREF44 (a) with Figure FIGREF44 (b), the diagonal trend of the video attention map got by CSSMCM is more obvious. The video attention is more focused where WAS predicts wrong, i.e. the area corresponding to "UTF8gbsn还向". Although WAS mistakenly predicts the "UTF8gbsn媒体" as "UTF8gbsn么体", the "UTF8gbsn媒体" and the "UTF8gbsn么体" have the same mouth shape, so the attention concentrates on the correct frame. 
It is interesting to note that in Figure FIGREF47 , when predicting the INLINEFORM0 -th character, attention is concentrated on the INLINEFORM1 -th tone. This may be because attention is applied to the outputs of the encoder, which actually include all the information from the previous INLINEFORM2 timesteps. The attention to the tone at the INLINEFORM3 -th timestep serves as a language model, which reduces the options for generating the character at the INLINEFORM4 -th timestep, making prediction more accurate. Summary and Extension In this paper, we propose CSSMCM, a Cascade Sequence-to-Sequence Model for Chinese Mandarin lip reading. CSSMCM is designed to predict the pinyin sequence, tone sequence, and Chinese character sequence one by one. When predicting the tone sequence, a dual attention mechanism is used to consider the video sequence and pinyin sequence at the same time. When predicting the Chinese character sequence, a triplet attention mechanism is proposed to take all of the video sequence, pinyin sequence, and tone sequence information into consideration. CSSMCM consistently outperforms other lip reading architectures on the proposed CMLR dataset. Lip reading and speech recognition are very similar. In Chinese Mandarin speech recognition, several different acoustic representations have been used, such as the syllable initial/final approach, the syllable initial/final with tone approach, the syllable approach, the syllable with tone approach, the preme/toneme approach BIBREF15 and the Chinese Character approach BIBREF16 . In this paper, the Chinese character is chosen as the output unit. However, we find that wrongly predicted characters severely affect the understandability of transcriptions. Using larger output units, such as Chinese words, may alleviate this problem.
WAS, LipCH-Net-seq, CSSMCM-w/o video
27b01883ed947b457d3bab0c66de26c0736e4f90
27b01883ed947b457d3bab0c66de26c0736e4f90_0
Q: What syntactic structure is used to model tones? Text: Introduction Lip reading, also known as visual speech recognition, aims to predict the sentence being spoken, given a silent video of a talking face. In noisy environments, where speech recognition is difficult, visual speech recognition offers an alternative way to understand speech. Besides, lip reading has practical potential in improved hearing aids, security, and silent dictation in public spaces. Lip reading is essentially a difficult problem, as most lip reading actuations, besides the lips and sometimes tongue and teeth, are latent and ambiguous. Several seemingly identical lip movements can produce different words. Thanks to the recent development of deep learning, English-based lip reading methods have made great progress, at both word-level BIBREF0 , BIBREF1 and sentence-level BIBREF2 , BIBREF3 . However, as the language of the most number of speakers, there is only a little work for Chinese Mandarin lip reading in the multimedia community. Yang et al. BIBREF4 present a naturally-distributed large-scale benchmark for Chinese Mandarin lip-reading in the wild, named LRW-1000, which contains 1,000 classes with 718,018 samples from more than 2,000 individual speakers. Each class corresponds to the syllables of a Mandarin word composed of one or several Chinese characters. However, they perform only word classification for Chinese Mandarin lip reading but not at the complete sentence level. LipCH-Net BIBREF5 is the first paper aiming for sentence-level Chinese Mandarin lip reading. LipCH-Net is a two-step end-to-end architecture, in which two deep neural network models are employed to perform the recognition of Picture-to-Pinyin (mouth motion pictures to pronunciations) and the recognition of Pinyin-to-Hanzi (pronunciations to texts) respectively. Then a joint optimization is performed to improve the overall performance. Belong to two different language families, English and Chinese Mandarin have many differences. The most significant one might be that: Chinese Mandarin is a tone language, while English is not. The tone is the use of pitch in language to distinguish lexical or grammatical meaning - that is, to distinguish or to inflect words . Even two words look the same on the face when pronounced, they can have different tones, thus have different meanings. For example, even though "UTF8gbsn练习" (which means practice) and "UTF8gbsn联系" (which means contact) have different meanings, but they have the same mouth movement. This increases ambiguity when lip reading. So the tone is an important factor for Chinese Mandarin lip reading. Based on the above considerations, in this paper, we present CSSMCM, a sentence-level Chinese Mandarin lip reading network, which contains three sub-networks. Same as BIBREF5 , in the first sub-network, pinyin sequence is predicted from the video. Different from BIBREF5 , which predicts pinyin characters from video, pinyin is taken as a whole in CSSMCM, also known as syllables. As we know, Mandarin Chinese is a syllable-based language and syllables are their logical unit of pronunciation. Compared with pinyin characters, syllables are a longer linguistic unit, and can reduce the difficulty of syllable choices in the decoder by sequence-to-sequence attention-based models BIBREF6 . Chen et al. BIBREF7 find that there might be a relationship between the production of lexical tones and the visible movements of the neck, head, and mouth. 
Motivated by this observation, in the second sub-network, both video and pinyin sequence is used as input to predict tone. Then in the third sub-network, video, pinyin, and tone sequence work together to predict the Chinese character sequence. At last, three sub-networks are jointly finetuned to improve overall performance. As there is no public sentence-level Chinese Mandarin lip reading dataset, we collect a new Chinese Mandarin Lip Reading dataset called CMLR based on China Network Television broadcasts containing talking faces together with subtitles of what is said. In summary, our major contributions are as follows. The Proposed Method In this section, we present CSSMCM, a lip reading model for Chinese Mandarin. As mention in Section SECREF1 , pinyin and tone are both important for Chinese Mandarin lip reading. Pinyin represents how to pronounce a Chinese character and is related to mouth movement. Tone can alleviate the ambiguity of visemes (several speech sounds that look the same) to some extent and can be inferred from visible movements. Based on this, the lip reading task is defined as follow: DISPLAYFORM0 The meaning of these symbols is given in Table TABREF5 . As shown in Equation ( EQREF6 ), the whole problem is divided into three parts, which corresponds to pinyin prediction, tone prediction, and character prediction separately. Each part will be described in detail below. Pinyin Prediction Sub-network The pinyin prediction sub-network transforms video sequence into pinyin sequence, which corresponds to INLINEFORM0 in Equation ( EQREF6 ). This sub-network is based on the sequence-to-sequence architecture with attention mechanism BIBREF8 . We name the encoder and decoder the video encoder and pinyin decoder, for the encoder process video sequence, and the decoder predicts pinyin sequence. The input video sequence is first fed into the VGG model BIBREF9 to extract visual feature. The output of conv5 of VGG is appended with global average pooling BIBREF10 to get the 512-dim feature vector. Then the 512-dim feature vector is fed into video encoder. The video encoder can be denoted as: DISPLAYFORM0 When predicting pinyin sequence, at each timestep INLINEFORM0 , video encoder outputs are attended to calculate a context vector INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM1 Tone Prediction Sub-network As shown in Equation ( EQREF6 ), tone prediction sub-network ( INLINEFORM0 ) takes video and pinyin sequence as inputs and predict corresponding tone sequence. This problem is modeled as a sequence-to-sequence learning problem too. The corresponding model architecture is shown in Figure FIGREF8 . In order to take both video and pinyin information into consideration when producing tone, a dual attention mechanism BIBREF3 is employed. Two independent attention mechanisms are used for video and pinyin sequence. Video context vectors INLINEFORM0 and pinyin context vectors INLINEFORM1 are fused when predicting a tone character at each decoder step. The video encoder is the same as in Section SECREF7 and the pinyin encoder is: DISPLAYFORM0 The tone decoder takes both video encoder outputs and pinyin encoder outputs to calculate context vector, and then predicts tones: DISPLAYFORM0 DISPLAYFORM1 Character Prediction Sub-network The character prediction sub-network corresponds to INLINEFORM0 in Equation ( EQREF6 ). It considers all the pinyin sequence, tone sequence and video sequence when predicting Chinese character. 
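Before continuing with the character decoder, here is a simplified sketch of the visual front-end described above: VGG convolutional features, global average pooling to a 512-dimensional vector per timestep, then a bidirectional GRU video encoder. Module names are ours, and the first convolution would need its input channels adapted to the stacked gray-scale lip frames actually used in the paper.

```python
# Simplified sketch of the video encoder (illustration, not the released model).
import torch
import torch.nn as nn
import torchvision.models as models

class VideoEncoder(nn.Module):
    def __init__(self, hidden_size=256):
        super().__init__()
        vgg = models.vgg16()                 # convolutional front-end
        self.conv = vgg.features             # NOTE: expects 3-channel input here
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling -> 512-dim
        self.gru = nn.GRU(512, hidden_size, num_layers=2,
                          bidirectional=True, batch_first=True)

    def forward(self, clips):
        # clips: (B, T, C, H, W), one entry per stack of lip frames
        b, t = clips.shape[:2]
        feats = self.conv(clips.flatten(0, 1))   # (B*T, 512, h, w)
        feats = self.pool(feats).flatten(1)      # (B*T, 512)
        feats = feats.view(b, t, -1)             # (B, T, 512)
        outputs, _ = self.gru(feats)             # (B, T, 2*hidden_size)
        return outputs
```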
Similarly, we also use attention based sequence-to-sequence architecture to model this equation. Here the attention mechanism is modified into triplet attention mechanism: DISPLAYFORM0 DISPLAYFORM1 For the following needs, the formula of tone encoder is also listed as follows: DISPLAYFORM0 CSSMCM Architecture The architecture of the proposed approach is demonstrated in Figure FIGREF32 . For better display, the three attention mechanisms are not shown in the figure. During the training of CSSMCM, the outputs of pinyin decoder are fed into pinyin encoder, the outputs of tone decoder into tone encoder: DISPLAYFORM0 DISPLAYFORM1 We replace Equation ( EQREF14 ) with Equation ( EQREF28 ), Equation ( EQREF26 ) with Equation ( EQREF29 ). Then, the three sub-networks are jointly trained and the overall loss function is defined as follows: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 stand for loss of pinyin prediction sub-network, tone prediction sub-network and character prediction sub-network respectively, as defined below. DISPLAYFORM0 Training Strategy To accelerate training and reduce overfitting, curriculum learning BIBREF3 is employed. The sentences are grouped into subsets according to the length of less than 11, 12-17, 18-23, more than 24 Chinese characters. Scheduled sampling proposed by BIBREF11 is used to eliminate the discrepancy between training and inference. At the training stage, the sampling rate from the previous output is selected from 0.7 to 1. Greedy decoder is used for fast decoding. Dataset In this section, a three-stage pipeline for generating the Chinese Mandarin Lip Reading (CMLR) dataset is described, which includes video pre-processing, text acquisition, and data generation. This three-stage pipeline is similar to the method mentioned in BIBREF3 , but considering the characteristics of our Chinese Mandarin dataset, we have optimized some steps and parts to generate a better quality lip reading dataset. The three-stage pipeline is detailed below. Video Pre-processing. First, national news program "News Broadcast" recorded between June 2009 and June 2018 is obtained from China Network Television website. Then, the HOG-based face detection method is performed BIBREF12 , followed by an open source platform for face recognition and alignment. The video clip set of eleven different hosts who broadcast the news is captured. During the face detection step, using frame skipping can improve efficiency while ensuring the program quality. Text Acquisition. Since there is no subtitle or text annotation in the original "News Broadcast" program, FFmpeg tools are used to extract the corresponding audio track from the video clip set. Then through the iFLYTEK ASR, the corresponding text annotation of the video clip set is obtained. However, there is some noise in these text annotation. English letters, Arabic numerals, and rare punctuation are deleted to get a more pure Chinese Mandarin lip reading dataset. Data Generation. The text annotation acquired in the previous step also contains timestamp information. Therefore, video clip set is intercepted according to these timestamp information, and then the corresponding word, phrase, or sentence video segment of the text annotation are obtained. Since the text timestamp information may have a few uncertain errors, some adjustments are made to the start frame and the end frame when intercepting the video segment. It is worth noting that through experiments, we found that using OpenCV can capture clearer video segment than the FFmpeg tools. 
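Returning to the joint training described earlier in this section: the overall objective sums the three sub-network losses. The sketch below is our own and assumes each sub-loss is a padded sequence cross-entropy over the corresponding decoder outputs; the paper's exact definitions are given in its equations.

```python
# Sketch of the joint objective: overall loss = pinyin loss + tone loss +
# character loss (each assumed here to be a padded sequence cross-entropy).
import torch.nn.functional as F

def sequence_ce(logits, targets, pad_id=0):
    # logits: (B, L, V), targets: (B, L)
    return F.cross_entropy(logits.transpose(1, 2), targets, ignore_index=pad_id)

def cssmcm_loss(pinyin_logits, pinyin_tgt, tone_logits, tone_tgt,
                char_logits, char_tgt):
    loss_pinyin = sequence_ce(pinyin_logits, pinyin_tgt)
    loss_tone = sequence_ce(tone_logits, tone_tgt)
    loss_char = sequence_ce(char_logits, char_tgt)
    return loss_pinyin + loss_tone + loss_char
```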
Through the three-stage pipeline mentioned above, we can obtain the Chinese Mandarin Lip Reading (CMLR) dataset containing more than 100,000 sentences, 25,000 phrases, 3,500 characters. The dataset is randomly divided into training set, validation set, and test set in a ratio of 7:1:2. Details are listed in Table TABREF37 . Implementation Details The input images are 64 INLINEFORM0 128 in dimension. Lip frames are transformed into gray-scale, and the VGG network takes every 5 lip frames as an input, moving 2 frames at each timestep. For all sub-networks, a two-layer bi-direction GRU BIBREF13 with a cell size of 256 is used for the encoder and a two-layer uni-direction GRU with a cell size of 512 for the decoder. For character and pinyin vocabulary, we keep characters and pinyin that appear more than 20 times. [sos], [eos] and [pad] are also included in these three vocabularies. The final vocabulary size is 371 for pinyin prediction sub-network, 8 for tone prediction sub-network (four tones plus a neutral tone), and 1,779 for character prediction sub-network. The initial learning rate was 0.0001 and decreased by 50% every time the training error did not improve for 4 epochs. CSSMCM is implemented using pytorch library and trained on a Quadro 64C P5000 with 16GB memory. The total end-to-end model was trained for around 12 days. Compared Methods and Evaluation Protocol WAS: The architecture used in BIBREF3 without the audio input. The decoder output Chinese character at each timestep. Others keep unchanged to the original implementation. LipCH-Net-seq: For a fair comparison, we use sequence-to-sequence with attention framework to replace the Connectionist temporal classification (CTC) loss BIBREF14 used in LipCH-Net BIBREF5 when converting picture to pinyin. CSSMCM-w/o video: To evaluate the necessity of video information when predicting tone, the video stream is removed when predicting tone and Chinese characters. In other word, video is only used when predicting the pinyin sequence. The tone is predicted from the pinyin sequence. Tone information and pinyin information work together to predict Chinese character. We tried to implement the Lipnet architecture BIBREF2 to predict Chinese character at each timestep. However, the model did not converge. The possible reasons are due to the way CTC loss works and the difference between English and Chinese Mandarin. Compared to English, which only contains 26 characters, Chinese Mandarin contains thousands of Chinese characters. When CTC calculates loss, it first adds blank between every character in a sentence, that causes the number of the blank label is far more than any other Chinese character. Thus, when Lipnet starts training, it predicts only the blank label. After a certain epoch, "UTF8gbsn的" character will occasionally appear until the learning rate decays to close to zero. For all experiments, Character Error Rate (CER) and Pinyin Error Rate (PER) are used as evaluation metrics. CER is defined as INLINEFORM0 , where INLINEFORM1 is the number of substitutions, INLINEFORM2 is the number of deletions, INLINEFORM3 is the number of insertions to get from the reference to the hypothesis and INLINEFORM4 is the number of words in the reference. PER is calculated in the same way as CER. Tone Error Rate (TER) is also included when analyzing CSSMCM, which is calculated in the same way as above. Results Table TABREF40 shows a detailed comparison between various sub-network of different methods. 
Comparing P2T and VP2T, VP2T considers video information when predicting the pinyin sequence and achieves a lower error rate. This verifies the conjecture of BIBREF7 that the generation of tones is related to the motion of the head. In terms of overall performance, CSSMCM exceeds all the other architecture on the CMLR dataset and achieves 32.48% character error rate. It is worth noting that CSSMCM-w/o video achieves the worst result (42.23% CER) even though its sub-networks perform well when trained separately. This may be due to the lack of visual information to support, and the accumulation of errors. CSSMCM using tone information performs better compared to LipCH-Net-seq, which does not use tone information. The comparison results show that tone is important when lip reading, and when predicting tone, visual information should be considered. Table TABREF41 shows some generated sentences from different methods. CSSMCM-w/o video architecture is not included due to its relatively lower performance. These are sentences other methods fail to predict but CSSMCM succeeds. The phrase "UTF8gbsn实惠" (which means affordable) in the first example sentence, has a tone of 2, 4 and its corresponding pinyin are shi, hui. WAS predicts it as "UTF8gbsn事会" (which means opportunity). Although the pinyin prediction is correct, the tone is wrong. LipCH-Net-seq predicts "UTF8gbsn实惠" as "UTF8gbsn吃贵" (not a word), which have the same finals "ui" and the corresponding mouth shapes are the same. It's the same in the second example. "UTF8gbsn前, 天, 年" have the same finals and mouth shapes, but the tone is different. These show that when predicting characters with the same lip shape but different tones, other methods are often unable to predict correctly. However, CSSMCM can leverage the tone information to predict successfully. Apart from the above results, Table TABREF42 also lists some failure cases of CSSMCM. The characters that CSSMCM predicts wrong are usually homophones or characters with the same final as the ground truth. In the first example, "UTF8gbsn价" and "UTF8gbsn下" have the same final, ia, while "UTF8gbsn一" and "UTF8gbsn医" are homophones in the second example. Unlike English, if one character in an English word is predicted wrong, the understanding of the transcriptions has little effect. However, if there is a character predicted wrong in Chinese words, it will greatly affect the understandability of transcriptions. In the second example, CSSMCM mispredicts "UTF8gbsn医学" ( which means medical) to "UTF8gbsn一水" (which means all). Although their first characters are pronounced the same, the meaning of the sentence changed from Now with the progress of medical science and technology in our country to It is now with the footsteps of China's Yishui Technology. Attention Visualisation Figure FIGREF44 (a) and Figure FIGREF44 (b) visualise the alignment of video frames and Chinese characters predicted by CSSMCM and WAS respectively. The ground truth sequence is "UTF8gbsn同时他还向媒体表示". Comparing Figure FIGREF44 (a) with Figure FIGREF44 (b), the diagonal trend of the video attention map got by CSSMCM is more obvious. The video attention is more focused where WAS predicts wrong, i.e. the area corresponding to "UTF8gbsn还向". Although WAS mistakenly predicts the "UTF8gbsn媒体" as "UTF8gbsn么体", the "UTF8gbsn媒体" and the "UTF8gbsn么体" have the same mouth shape, so the attention concentrates on the correct frame. 
It is interesting to note that in Figure FIGREF47 , when predicting the INLINEFORM0 -th character, attention is concentrated on the INLINEFORM1 -th tone. This may be because attention is applied to the outputs of the encoder, which actually include all the information from the previous INLINEFORM2 timesteps. The attention to the tone at the INLINEFORM3 -th timestep serves as a language model, which reduces the options for generating the character at the INLINEFORM4 -th timestep, making prediction more accurate. Summary and Extension In this paper, we propose CSSMCM, a Cascade Sequence-to-Sequence Model for Chinese Mandarin lip reading. CSSMCM is designed to predict the pinyin sequence, tone sequence, and Chinese character sequence one by one. When predicting the tone sequence, a dual attention mechanism is used to consider the video sequence and pinyin sequence at the same time. When predicting the Chinese character sequence, a triplet attention mechanism is proposed to take all of the video sequence, pinyin sequence, and tone sequence information into consideration. CSSMCM consistently outperforms other lip reading architectures on the proposed CMLR dataset. Lip reading and speech recognition are very similar. In Chinese Mandarin speech recognition, several different acoustic representations have been used, such as the syllable initial/final approach, the syllable initial/final with tone approach, the syllable approach, the syllable with tone approach, the preme/toneme approach BIBREF15 and the Chinese Character approach BIBREF16 . In this paper, the Chinese character is chosen as the output unit. However, we find that wrongly predicted characters severely affect the understandability of transcriptions. Using larger output units, such as Chinese words, may alleviate this problem.
syllables
9714cb7203c18a0c53805f6c889f2e20b4cab5dd
9714cb7203c18a0c53805f6c889f2e20b4cab5dd_0
Q: What visual information characterizes tones? Text: Introduction Lip reading, also known as visual speech recognition, aims to predict the sentence being spoken, given a silent video of a talking face. In noisy environments, where speech recognition is difficult, visual speech recognition offers an alternative way to understand speech. Besides, lip reading has practical potential in improved hearing aids, security, and silent dictation in public spaces. Lip reading is essentially a difficult problem, as most lip reading actuations, besides the lips and sometimes tongue and teeth, are latent and ambiguous. Several seemingly identical lip movements can produce different words. Thanks to the recent development of deep learning, English-based lip reading methods have made great progress, at both word-level BIBREF0 , BIBREF1 and sentence-level BIBREF2 , BIBREF3 . However, as the language of the most number of speakers, there is only a little work for Chinese Mandarin lip reading in the multimedia community. Yang et al. BIBREF4 present a naturally-distributed large-scale benchmark for Chinese Mandarin lip-reading in the wild, named LRW-1000, which contains 1,000 classes with 718,018 samples from more than 2,000 individual speakers. Each class corresponds to the syllables of a Mandarin word composed of one or several Chinese characters. However, they perform only word classification for Chinese Mandarin lip reading but not at the complete sentence level. LipCH-Net BIBREF5 is the first paper aiming for sentence-level Chinese Mandarin lip reading. LipCH-Net is a two-step end-to-end architecture, in which two deep neural network models are employed to perform the recognition of Picture-to-Pinyin (mouth motion pictures to pronunciations) and the recognition of Pinyin-to-Hanzi (pronunciations to texts) respectively. Then a joint optimization is performed to improve the overall performance. Belong to two different language families, English and Chinese Mandarin have many differences. The most significant one might be that: Chinese Mandarin is a tone language, while English is not. The tone is the use of pitch in language to distinguish lexical or grammatical meaning - that is, to distinguish or to inflect words . Even two words look the same on the face when pronounced, they can have different tones, thus have different meanings. For example, even though "UTF8gbsn练习" (which means practice) and "UTF8gbsn联系" (which means contact) have different meanings, but they have the same mouth movement. This increases ambiguity when lip reading. So the tone is an important factor for Chinese Mandarin lip reading. Based on the above considerations, in this paper, we present CSSMCM, a sentence-level Chinese Mandarin lip reading network, which contains three sub-networks. Same as BIBREF5 , in the first sub-network, pinyin sequence is predicted from the video. Different from BIBREF5 , which predicts pinyin characters from video, pinyin is taken as a whole in CSSMCM, also known as syllables. As we know, Mandarin Chinese is a syllable-based language and syllables are their logical unit of pronunciation. Compared with pinyin characters, syllables are a longer linguistic unit, and can reduce the difficulty of syllable choices in the decoder by sequence-to-sequence attention-based models BIBREF6 . Chen et al. BIBREF7 find that there might be a relationship between the production of lexical tones and the visible movements of the neck, head, and mouth. 
Motivated by this observation, in the second sub-network, both video and pinyin sequence is used as input to predict tone. Then in the third sub-network, video, pinyin, and tone sequence work together to predict the Chinese character sequence. At last, three sub-networks are jointly finetuned to improve overall performance. As there is no public sentence-level Chinese Mandarin lip reading dataset, we collect a new Chinese Mandarin Lip Reading dataset called CMLR based on China Network Television broadcasts containing talking faces together with subtitles of what is said. In summary, our major contributions are as follows. The Proposed Method In this section, we present CSSMCM, a lip reading model for Chinese Mandarin. As mention in Section SECREF1 , pinyin and tone are both important for Chinese Mandarin lip reading. Pinyin represents how to pronounce a Chinese character and is related to mouth movement. Tone can alleviate the ambiguity of visemes (several speech sounds that look the same) to some extent and can be inferred from visible movements. Based on this, the lip reading task is defined as follow: DISPLAYFORM0 The meaning of these symbols is given in Table TABREF5 . As shown in Equation ( EQREF6 ), the whole problem is divided into three parts, which corresponds to pinyin prediction, tone prediction, and character prediction separately. Each part will be described in detail below. Pinyin Prediction Sub-network The pinyin prediction sub-network transforms video sequence into pinyin sequence, which corresponds to INLINEFORM0 in Equation ( EQREF6 ). This sub-network is based on the sequence-to-sequence architecture with attention mechanism BIBREF8 . We name the encoder and decoder the video encoder and pinyin decoder, for the encoder process video sequence, and the decoder predicts pinyin sequence. The input video sequence is first fed into the VGG model BIBREF9 to extract visual feature. The output of conv5 of VGG is appended with global average pooling BIBREF10 to get the 512-dim feature vector. Then the 512-dim feature vector is fed into video encoder. The video encoder can be denoted as: DISPLAYFORM0 When predicting pinyin sequence, at each timestep INLINEFORM0 , video encoder outputs are attended to calculate a context vector INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM1 Tone Prediction Sub-network As shown in Equation ( EQREF6 ), tone prediction sub-network ( INLINEFORM0 ) takes video and pinyin sequence as inputs and predict corresponding tone sequence. This problem is modeled as a sequence-to-sequence learning problem too. The corresponding model architecture is shown in Figure FIGREF8 . In order to take both video and pinyin information into consideration when producing tone, a dual attention mechanism BIBREF3 is employed. Two independent attention mechanisms are used for video and pinyin sequence. Video context vectors INLINEFORM0 and pinyin context vectors INLINEFORM1 are fused when predicting a tone character at each decoder step. The video encoder is the same as in Section SECREF7 and the pinyin encoder is: DISPLAYFORM0 The tone decoder takes both video encoder outputs and pinyin encoder outputs to calculate context vector, and then predicts tones: DISPLAYFORM0 DISPLAYFORM1 Character Prediction Sub-network The character prediction sub-network corresponds to INLINEFORM0 in Equation ( EQREF6 ). It considers all the pinyin sequence, tone sequence and video sequence when predicting Chinese character. 
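A rough sketch of a character-decoder step that fuses attention over the video, pinyin, and tone encoder outputs (the triplet attention elaborated in the next paragraph); the function names and dot-product attention are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative character-decoder step with triplet attention (our sketch).
import torch
import torch.nn.functional as F

def attend(query, memory):
    # query: (B, H), memory: (B, T, H) -> context: (B, H)
    weights = F.softmax(torch.bmm(memory, query.unsqueeze(2)).squeeze(2), dim=1)
    return torch.bmm(weights.unsqueeze(1), memory).squeeze(1)

def character_step(hidden, video_out, pinyin_out, tone_out, out_proj):
    # hidden: current decoder state (B, H); *_out: encoder outputs (B, T_*, H);
    # out_proj: e.g. nn.Linear(4 * H, character_vocab_size)
    c_video = attend(hidden, video_out)
    c_pinyin = attend(hidden, pinyin_out)
    c_tone = attend(hidden, tone_out)
    fused = torch.cat([hidden, c_video, c_pinyin, c_tone], dim=1)  # (B, 4H)
    return out_proj(fused)  # logits over the character vocabulary
```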
Similarly, we also use attention based sequence-to-sequence architecture to model this equation. Here the attention mechanism is modified into triplet attention mechanism: DISPLAYFORM0 DISPLAYFORM1 For the following needs, the formula of tone encoder is also listed as follows: DISPLAYFORM0 CSSMCM Architecture The architecture of the proposed approach is demonstrated in Figure FIGREF32 . For better display, the three attention mechanisms are not shown in the figure. During the training of CSSMCM, the outputs of pinyin decoder are fed into pinyin encoder, the outputs of tone decoder into tone encoder: DISPLAYFORM0 DISPLAYFORM1 We replace Equation ( EQREF14 ) with Equation ( EQREF28 ), Equation ( EQREF26 ) with Equation ( EQREF29 ). Then, the three sub-networks are jointly trained and the overall loss function is defined as follows: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 stand for loss of pinyin prediction sub-network, tone prediction sub-network and character prediction sub-network respectively, as defined below. DISPLAYFORM0 Training Strategy To accelerate training and reduce overfitting, curriculum learning BIBREF3 is employed. The sentences are grouped into subsets according to the length of less than 11, 12-17, 18-23, more than 24 Chinese characters. Scheduled sampling proposed by BIBREF11 is used to eliminate the discrepancy between training and inference. At the training stage, the sampling rate from the previous output is selected from 0.7 to 1. Greedy decoder is used for fast decoding. Dataset In this section, a three-stage pipeline for generating the Chinese Mandarin Lip Reading (CMLR) dataset is described, which includes video pre-processing, text acquisition, and data generation. This three-stage pipeline is similar to the method mentioned in BIBREF3 , but considering the characteristics of our Chinese Mandarin dataset, we have optimized some steps and parts to generate a better quality lip reading dataset. The three-stage pipeline is detailed below. Video Pre-processing. First, national news program "News Broadcast" recorded between June 2009 and June 2018 is obtained from China Network Television website. Then, the HOG-based face detection method is performed BIBREF12 , followed by an open source platform for face recognition and alignment. The video clip set of eleven different hosts who broadcast the news is captured. During the face detection step, using frame skipping can improve efficiency while ensuring the program quality. Text Acquisition. Since there is no subtitle or text annotation in the original "News Broadcast" program, FFmpeg tools are used to extract the corresponding audio track from the video clip set. Then through the iFLYTEK ASR, the corresponding text annotation of the video clip set is obtained. However, there is some noise in these text annotation. English letters, Arabic numerals, and rare punctuation are deleted to get a more pure Chinese Mandarin lip reading dataset. Data Generation. The text annotation acquired in the previous step also contains timestamp information. Therefore, video clip set is intercepted according to these timestamp information, and then the corresponding word, phrase, or sentence video segment of the text annotation are obtained. Since the text timestamp information may have a few uncertain errors, some adjustments are made to the start frame and the end frame when intercepting the video segment. It is worth noting that through experiments, we found that using OpenCV can capture clearer video segment than the FFmpeg tools. 
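Looping back to the training strategy above, the curriculum grouping of sentences by length can be sketched with a trivial helper of our own (boundary cases between the stated buckets are resolved arbitrarily here):

```python
# Group training sentences into curriculum subsets by character length,
# roughly following the buckets described above (<11, 12-17, 18-23, >24).
def curriculum_buckets(sentences):
    buckets = {"short": [], "medium": [], "long": [], "very_long": []}
    for s in sentences:
        n = len(s)  # number of Chinese characters
        if n < 12:
            buckets["short"].append(s)
        elif n < 18:
            buckets["medium"].append(s)
        elif n < 24:
            buckets["long"].append(s)
        else:
            buckets["very_long"].append(s)
    return buckets
```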
Through the three-stage pipeline mentioned above, we can obtain the Chinese Mandarin Lip Reading (CMLR) dataset containing more than 100,000 sentences, 25,000 phrases, 3,500 characters. The dataset is randomly divided into training set, validation set, and test set in a ratio of 7:1:2. Details are listed in Table TABREF37 . Implementation Details The input images are 64 INLINEFORM0 128 in dimension. Lip frames are transformed into gray-scale, and the VGG network takes every 5 lip frames as an input, moving 2 frames at each timestep. For all sub-networks, a two-layer bi-direction GRU BIBREF13 with a cell size of 256 is used for the encoder and a two-layer uni-direction GRU with a cell size of 512 for the decoder. For character and pinyin vocabulary, we keep characters and pinyin that appear more than 20 times. [sos], [eos] and [pad] are also included in these three vocabularies. The final vocabulary size is 371 for pinyin prediction sub-network, 8 for tone prediction sub-network (four tones plus a neutral tone), and 1,779 for character prediction sub-network. The initial learning rate was 0.0001 and decreased by 50% every time the training error did not improve for 4 epochs. CSSMCM is implemented using pytorch library and trained on a Quadro 64C P5000 with 16GB memory. The total end-to-end model was trained for around 12 days. Compared Methods and Evaluation Protocol WAS: The architecture used in BIBREF3 without the audio input. The decoder output Chinese character at each timestep. Others keep unchanged to the original implementation. LipCH-Net-seq: For a fair comparison, we use sequence-to-sequence with attention framework to replace the Connectionist temporal classification (CTC) loss BIBREF14 used in LipCH-Net BIBREF5 when converting picture to pinyin. CSSMCM-w/o video: To evaluate the necessity of video information when predicting tone, the video stream is removed when predicting tone and Chinese characters. In other word, video is only used when predicting the pinyin sequence. The tone is predicted from the pinyin sequence. Tone information and pinyin information work together to predict Chinese character. We tried to implement the Lipnet architecture BIBREF2 to predict Chinese character at each timestep. However, the model did not converge. The possible reasons are due to the way CTC loss works and the difference between English and Chinese Mandarin. Compared to English, which only contains 26 characters, Chinese Mandarin contains thousands of Chinese characters. When CTC calculates loss, it first adds blank between every character in a sentence, that causes the number of the blank label is far more than any other Chinese character. Thus, when Lipnet starts training, it predicts only the blank label. After a certain epoch, "UTF8gbsn的" character will occasionally appear until the learning rate decays to close to zero. For all experiments, Character Error Rate (CER) and Pinyin Error Rate (PER) are used as evaluation metrics. CER is defined as INLINEFORM0 , where INLINEFORM1 is the number of substitutions, INLINEFORM2 is the number of deletions, INLINEFORM3 is the number of insertions to get from the reference to the hypothesis and INLINEFORM4 is the number of words in the reference. PER is calculated in the same way as CER. Tone Error Rate (TER) is also included when analyzing CSSMCM, which is calculated in the same way as above. Results Table TABREF40 shows a detailed comparison between various sub-network of different methods. 
Comparing P2T and VP2T, VP2T considers video information when predicting the pinyin sequence and achieves a lower error rate. This verifies the conjecture of BIBREF7 that the generation of tones is related to the motion of the head. In terms of overall performance, CSSMCM exceeds all the other architecture on the CMLR dataset and achieves 32.48% character error rate. It is worth noting that CSSMCM-w/o video achieves the worst result (42.23% CER) even though its sub-networks perform well when trained separately. This may be due to the lack of visual information to support, and the accumulation of errors. CSSMCM using tone information performs better compared to LipCH-Net-seq, which does not use tone information. The comparison results show that tone is important when lip reading, and when predicting tone, visual information should be considered. Table TABREF41 shows some generated sentences from different methods. CSSMCM-w/o video architecture is not included due to its relatively lower performance. These are sentences other methods fail to predict but CSSMCM succeeds. The phrase "UTF8gbsn实惠" (which means affordable) in the first example sentence, has a tone of 2, 4 and its corresponding pinyin are shi, hui. WAS predicts it as "UTF8gbsn事会" (which means opportunity). Although the pinyin prediction is correct, the tone is wrong. LipCH-Net-seq predicts "UTF8gbsn实惠" as "UTF8gbsn吃贵" (not a word), which have the same finals "ui" and the corresponding mouth shapes are the same. It's the same in the second example. "UTF8gbsn前, 天, 年" have the same finals and mouth shapes, but the tone is different. These show that when predicting characters with the same lip shape but different tones, other methods are often unable to predict correctly. However, CSSMCM can leverage the tone information to predict successfully. Apart from the above results, Table TABREF42 also lists some failure cases of CSSMCM. The characters that CSSMCM predicts wrong are usually homophones or characters with the same final as the ground truth. In the first example, "UTF8gbsn价" and "UTF8gbsn下" have the same final, ia, while "UTF8gbsn一" and "UTF8gbsn医" are homophones in the second example. Unlike English, if one character in an English word is predicted wrong, the understanding of the transcriptions has little effect. However, if there is a character predicted wrong in Chinese words, it will greatly affect the understandability of transcriptions. In the second example, CSSMCM mispredicts "UTF8gbsn医学" ( which means medical) to "UTF8gbsn一水" (which means all). Although their first characters are pronounced the same, the meaning of the sentence changed from Now with the progress of medical science and technology in our country to It is now with the footsteps of China's Yishui Technology. Attention Visualisation Figure FIGREF44 (a) and Figure FIGREF44 (b) visualise the alignment of video frames and Chinese characters predicted by CSSMCM and WAS respectively. The ground truth sequence is "UTF8gbsn同时他还向媒体表示". Comparing Figure FIGREF44 (a) with Figure FIGREF44 (b), the diagonal trend of the video attention map got by CSSMCM is more obvious. The video attention is more focused where WAS predicts wrong, i.e. the area corresponding to "UTF8gbsn还向". Although WAS mistakenly predicts the "UTF8gbsn媒体" as "UTF8gbsn么体", the "UTF8gbsn媒体" and the "UTF8gbsn么体" have the same mouth shape, so the attention concentrates on the correct frame. 
It is interesting to note that in Figure FIGREF47 , when predicting the INLINEFORM0 -th character, attention is concentrated on the INLINEFORM1 -th tone. This may be because attention is applied to the outputs of the encoder, which actually include all the information from the previous INLINEFORM2 timesteps. The attention to the tone at the INLINEFORM3 -th timestep serves as a language model, which reduces the options for generating the character at the INLINEFORM4 -th timestep, making prediction more accurate. Summary and Extension In this paper, we propose CSSMCM, a Cascade Sequence-to-Sequence Model for Chinese Mandarin lip reading. CSSMCM is designed to predict the pinyin sequence, tone sequence, and Chinese character sequence one by one. When predicting the tone sequence, a dual attention mechanism is used to consider the video sequence and pinyin sequence at the same time. When predicting the Chinese character sequence, a triplet attention mechanism is proposed to take all of the video sequence, pinyin sequence, and tone sequence information into consideration. CSSMCM consistently outperforms other lip reading architectures on the proposed CMLR dataset. Lip reading and speech recognition are very similar. In Chinese Mandarin speech recognition, several different acoustic representations have been used, such as the syllable initial/final approach, the syllable initial/final with tone approach, the syllable approach, the syllable with tone approach, the preme/toneme approach BIBREF15 and the Chinese Character approach BIBREF16 . In this paper, the Chinese character is chosen as the output unit. However, we find that wrongly predicted characters severely affect the understandability of transcriptions. Using larger output units, such as Chinese words, may alleviate this problem.
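To tie the summary above together, the following pseudocode-level sketch shows greedy inference through the cascade. It is our own illustration: the encoder/decoder objects and their greedy(...) methods are placeholder interfaces, not the released implementation.

```python
# High-level sketch of greedy inference through the three sub-networks.
# All arguments are placeholder callables/objects (illustration only).
def cssmcm_greedy_decode(video_frames, video_enc, pinyin_dec, pinyin_enc,
                         tone_dec, tone_enc, char_dec):
    video_out = video_enc(video_frames)                # video encoder outputs
    pinyin_seq = pinyin_dec.greedy(video_out)          # video -> pinyin (syllables)
    pinyin_out = pinyin_enc(pinyin_seq)
    tone_seq = tone_dec.greedy(video_out, pinyin_out)  # dual attention -> tones
    tone_out = tone_enc(tone_seq)
    # triplet attention: video + pinyin + tone -> Chinese characters
    return char_dec.greedy(video_out, pinyin_out, tone_out)
```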
video sequence is first fed into the VGG model BIBREF9 to extract visual feature
a22b900fcd76c3d36b5679691982dc6e9a3d34bf
a22b900fcd76c3d36b5679691982dc6e9a3d34bf_0
Q: Do they report results only on English data? Text: Introduction In recent years we have witnessed a great surge in activity in the area of computational argument analysis (e.g. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 ), and the emergence of dedicated venues such as the ACL Argument Mining workshop series starting in 2014 BIBREF4 . Argumentative relation classification is a sub-task of argument analysis that aims to determine relations between argumentative units A and B, for example, A supports B; A attacks B. Consider the following argumentative units (1) and (2), given the topic (0) “Marijuana should be legalized”: This example is modeled in Figure FIGREF3 . It is clear that (1) has a negative stance towards the topic and (2) has a positive stance towards the topic. Moreover, we can say that (2) attacks (1). In discourse, such a relation is often made explicit through discourse markers: (1). However, (2); On the one hand (1), on the other (2); (1), although (2); Admittedly, (2); etc. In the absence of such markers we must determine this relation by assessing the semantics of the individual argumentative units, including (often implicit) world knowledge about how they are related to each other. In this work, we show that argumentative relation classifiers – when provided with textual context surrounding an argumentative unit's span – are very prone to neglect the actual textual content of the EAU span. Instead they heavily rely on contextual markers, such as conjunctions or adverbials, as a basis for prediction. We argue that a system's capacity of predicting the correct relation based on the argumentative units' content is important in many circumstances, e.g., when an argumentative debate crosses document boundaries. For example, the prohibition of marijuana debate extends across populations and countries – argumentative units for this debate can be recovered from thousands of documents scattered across the world wide web. As a consequence, argumentative relation classification systems should not be (immensely) dependent on contextual clues – in the discussed cross-document setting these clues may even be misleading for such a system, since source and target arguments can be embedded in different textual contexts (e.g., when (1) and (2) stem from different documents it is easy to imagine a textual context where (2) is not introduced by however but instead by an `inverse' form such as e.g. moreover). Related Work It is well-known that the rhetorical and argumentative structure of texts bear great similarities. For example, BIBREF5 , BIBREF6 , BIBREF0 observe that elementary discourse units (EDUs) in RST BIBREF7 share great similarity with elementary argumentative units (EAUs) in argumentation analysis. BIBREF8 experiment with a modified version of the Microtext corpus BIBREF9 , which is an extensively annotated albeit small corpus. Similar to us, they separate argumentative units from discursive contextual markers. While BIBREF8 conduct a human evaluation to investigate the separation of Logos and Pathos aspects of arguments, our work investigates how (de-)contextualization of argumentative units affects automatic argumentative relation classification models. Argumentative Relation Prediction: Models and Features In this section, we describe different formulations of the argumentative relation classification task and describe features used by our replicated model. In order to test our hypotheses, we propose to group all features into three distinct types. 
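As a small illustration of the content/context separation this work builds on, the helper below (our own, with a made-up example sentence) splits a covering sentence into the EAU span and its embedding context, yielding the views that the feature types introduced next are computed from.

```python
# Separate an elementary argumentative unit (EAU) span from its embedding
# context, given character offsets (illustrative helper, not from the paper).
def split_views(sentence, eau_start, eau_end):
    eau_span = sentence[eau_start:eau_end]                      # content view
    context = sentence[:eau_start] + " " + sentence[eau_end:]   # context view
    full = sentence                                             # full view
    return eau_span, context.strip(), full

# e.g. split_views("However, legalization would reduce crime.", 9, 41)
# -> ("legalization would reduce crime.", "However,", full sentence)
```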
Models Now, we introduce a classification of three different prediction models used in the argumentative relation prediction literature. We will inspect all of them and show that all can suffer from severe issues when focusing (too much) on the context. The model INLINEFORM0 adopts a discourse parsing view on argumentative relation prediction and predicts one outgoing edge for an argumentative unit (one-outgoing edge). Model INLINEFORM1 assumes a connected graph with argumentative units and is tasked with predicting edge labels for unit tuples (labeling relations in a graph). Finally, a model INLINEFORM2 is given two (possibly) unrelated argumentative units and is tasked with predicting connections as well as edge labels (joint edge prediction and labeling). BIBREF13 divide the task into relation prediction INLINEFORM0 and relation class assignment INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM0 which the authors describe as argumentative relation identification ( INLINEFORM0 ) and stance detection ( INLINEFORM1 ). In their experiments, INLINEFORM2 , i.e., no distinction is made between features that access only the argument content (EAU span) or only the EAU's embedding context, and some features also consider both (e.g., discourse features). This model adopts a parsing view on argumentative relation classification: every unit is allowed to have only one type of outgoing relation (this follows trivially from the fact that INLINEFORM3 has only one input). Applying such a model to argumentative attack and support relations might impose unrealistic constraints on the resulting argumentation graph: A given premise might in fact attack or support several other premises. The approach may suffice for the case of student argumentative essays, where EAUs are well-framed in a discourse structure, but seems overly restrictive for many other scenarios. Another way of framing the task, is to learn a function DISPLAYFORM0 Here, an argumentative unit is allowed to be in a attack or support relation to multiple other EAUs. Yet, both INLINEFORM0 and INLINEFORM1 assume that inputs are already linked and only the class of the link is unknown. Thus, we might also model the task in a three-class classification setting to learn a more general function that performs relation prediction and classification jointly (see also, e.g., BIBREF10 ): DISPLAYFORM0 The model described by Eq. EQREF22 is the most general one: not only does it assume a graph view on argumentative units and their relations (as does Eq. EQREF20 ); in model formulation (Eq. EQREF22 ), an argumentative unit can have no or multiple support or attack relations. It naturally allows for cases where an argumentative unit INLINEFORM0 (supports INLINEFORM1 INLINEFORM2 attacks INLINEFORM3 INLINEFORM4 is-unrelated-to INLINEFORM5 ). Given a set of EAUs mined from different documents, this model enables us to construct a full-fledged argumentation graph. Feature implementation Our feature implementation follows the feature descriptions for Stance recognition and link identification in BIBREF13 . These features and variations of them have been used successfully in several successive works (cf. BIBREF1 , BIBREF16 , BIBREF15 ). For any model the features are indexed by INLINEFORM0 . We create a function INLINEFORM1 which maps from feature indices to feature types. In other words, INLINEFORM2 tells us, for any given feature, whether it is content-based ( INLINEFORM3 ), content-ignorant ( INLINEFORM4 ) or full access ( INLINEFORM5 ). 
The features for, e.g., the joint prediction model INLINEFORM6 of type INLINEFORM7 ( INLINEFORM8 ) can then simply be described as INLINEFORM9 . Recall that features computed on the basis of the EAU span are content-based ( INLINEFORM10 ), features from the EAU-surrounding text are content-ignorant ( INLINEFORM11 ) and features computed from both are denoted by full-access ( INLINEFORM12 ). Details on the extraction of features are provided below. These consist of boolean values indicating whether a certain word appears in the argumentative source or target EAU or both (and separately, their contexts). More precisely, for any classification instance we extract uni-grams from within the span of the EAU (if INLINEFORM0 ) or solely from the sentence-context surrounding the EAUs (if INLINEFORM1 ). Words which occur in both bags are only visible in the full-access setup INLINEFORM2 and are modeled as binary indicators. Such features consist of syntactic production rules extracted from constituency trees – they are modelled analogously to the lexical features as a bag of production rules. To make a clear division between features derived from the EAU embedding context and features derived from within the EAU span, we divide the constituency tree in two parts, as is illustrated in Figure FIGREF26 . If the EAU is embedded in a covering sentence, we cut the syntax tree at the corresponding edge ( in Figure FIGREF26 ). In this example, the content-ignorant ( INLINEFORM0 ) bag-of-word production rule representation includes the rules INLINEFORM1 and INLINEFORM2 . Analogously to the lexical features, the production rules are modeled as binary indicator features. These features describe shallow statistics such as the ratio of argumentative unit tokens compared to sentence tokens or the position of the argumentative unit in the paragraph. We set these features to zero for the content representation of the argumentative unit and replicate those features that allow us to treat the argumentative unit as a black-box. For example, in the content-based ( INLINEFORM0 ) system that has access only to the EAU, we can compute the #tokens in the EAU, but not the #tokens in EAU divided by #tokens in the sentence. The latter feature is only accessible in the full access system variants. Hence, in the content-based ( INLINEFORM1 ) system most of these statistics are set to zero since they cannot be computed by considering only the EAU span. For the content-based representation we retrieve only discourse relations that are confined within the span of the argumentative unit. In the very frequent case that discourse features cross the boundaries of embedding context and EAU span, we only take them into account for INLINEFORM0 . We use the element-wise sum of 300-dimensional pre-trained GloVe vectors BIBREF24 corresponding to the words within the EAU span ( INLINEFORM0 ) and the words of the EAU-surrounding context ( INLINEFORM1 ). Additionally, we compute the element-wise subtraction of the source EAU vector from the target EAU vector, with the aim of modelling directions in distributional space, similarly to BIBREF25 . Words with no corresponding pre-trained word vector and empty sequences (e.g., no preceding context available) are treated as a zero-vector. Tree-based sentiment annotations are sentiment scores assigned to nodes in constituency parse trees BIBREF26 . We represent these scores by a one-hot vector of dimension 5 (5 is very positive, 1 is very negative). 
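A minimal sketch of the lexical feature split described above, assuming the token indices of the EAU span are known; it simplifies the paper's special handling of words that occur in both the span and the context.

```python
def unigram_features(sentence_tokens, eau_start, eau_end):
    span = sentence_tokens[eau_start:eau_end]                            # EAU span tokens
    context = sentence_tokens[:eau_start] + sentence_tokens[eau_end:]    # surrounding context
    cb = {f"CB_{w.lower()}": 1 for w in span}       # content-based view (span only)
    ci = {f"CI_{w.lower()}": 1 for w in context}    # content-ignorant view (context only)
    fa = {**cb, **ci}                               # full-access view (both)
    return cb, ci, fa

tokens = "However , smoking marijuana damages the lungs .".split()
cb, ci, fa = unigram_features(tokens, eau_start=2, eau_end=7)
print(sorted(ci))   # the content-ignorant view only ever sees tokens like 'however'
```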
We determine the contextual ( INLINEFORM0 ) sentiment by looking at the highest possible node of the context which does not contain the EAU (ADVP in Figure FIGREF26 ). The sentiment for an EAU span ( INLINEFORM1 ) is assigned to the highest possible node covering the EAU span which does not contain the context sub-tree (S in Figure FIGREF26 ). The full-access ( INLINEFORM2 ) score is assigned to the lowest possible node which covers both the EAU span and its surrounding context (S' in Figure FIGREF26 ). Next to the sentiment scores for the selected tree nodes and analogously to the word embeddings, we also calculate the element-wise subtraction of the one-hot sentiment source vectors from the one-hot sentiment target vectors. This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors. Results Our first step towards our main experiments is to replicate the competitive argumentative relation classifier of BIBREF13 , BIBREF1 . Hence, for comparison purposes, we first formulate the task exactly as it was done in this prior work, using the model formulation in Eq. EQREF17 , which determines the type of outgoing edge from a source (i.e., tree-like view). The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features. The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1. Intriguingly, the content-ignorant models ( INLINEFORM6 ) always perform significantly better than the models which only have access to the EAUs' content ( INLINEFORM7 , INLINEFORM8 ). In the most general task formulation ( INLINEFORM9 ), we observe that INLINEFORM10 even significantly outperforms the model which has maximum access (seeing both EAU spans and surrounding contexts: INLINEFORM11 ). At first glance, the results of the purely EAU focused systems ( INLINEFORM0 ) are disappointing, since they fall far behind their competitors. On the other hand, their F1 scores are not devastatingly bad. The strong most-frequent-class baseline is significantly outperformed by the content-based ( INLINEFORM1 ) system, across all three prediction settings. In summary our findings are as follows: (i) models which see the EAU span (content-based, INLINEFORM0 ) are significantly outperformed by models that have no access to the span itself (content-ignorant, INLINEFORM1 ) across all settings; (ii) in two of three prediction settings ( INLINEFORM2 and INLINEFORM3 ), the model which only has access to the context even outperforms the model that has access to all information in the input. The fact that using features derived exclusively from the EAU embedding context ( INLINEFORM4 ) can lead to better results than using a full feature-system ( INLINEFORM5 ) suggests that some information from the EAU can even be harmful. Why this is the case, we cannot answer exactly. A plausible cause might be related to the smaller dimension of the feature space, which makes the SVM less likely to overfit. Still, this finding comes as a surprise and calls for further investigation in future work. 
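The evaluation protocol behind the reported comparison, macro F1 against a most-frequent-class baseline, can be sketched with scikit-learn as below; the data is random and stands in for the paper's actual features, splits and scores.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

rng = np.random.RandomState(1)
X_train, X_test = rng.rand(120, 8), rng.rand(40, 8)
y_train = rng.choice(["support", "attack", "neither"], 120, p=[0.6, 0.3, 0.1])
y_test = rng.choice(["support", "attack", "neither"], 40, p=[0.6, 0.3, 0.1])

baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
model = LinearSVC().fit(X_train, y_train)

print("baseline macro-F1:", f1_score(y_test, baseline.predict(X_test), average="macro"))
print("model    macro-F1:", f1_score(y_test, model.predict(X_test), average="macro"))
```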
A system for argumentative relation classification can be applied in one of two settings: single-document or cross-document, as illustrated in Figure FIGREF42 : in the first case (top), a system is tasked to classify EAUs that appear linearly in one document – here contextual clues can often highlight the relationship between two units. This is the setting we have been considering up to now. However, in the second scenario (bottom), we have moved away from the closed single-document setting and ask the system to classify two EAUs extracted from different document contexts. This setting applies, for instance, when we are mining arguments from multiple sources. In both cases, however, a system that relies more on contextual clues than on the content expressed in the EAUs is problematic: in the single-document setting, such a system will rely on discourse indicators – whether or not they are justified by content – and can thus easily be fooled. In the cross-document setting, discourse-based indicators – being inherently defined with respect to their internal document context – do not have a defined rhetorical function with respect to EAUs in a separate document and thus a system that has learned to rely on such markers within a single-document setting can be seriously misled. We believe that the cross-document setting should be an important goal in argumentation analysis, since it generalizes better to many debates of interest, where EAUs can be found scattered across thousands of documents. For example, for the topic of legalizing marijuana, EAUs may be mined from millions of documents and thus their relations may naturally extend across document boundaries. If a system learns to over-proportionally attend to the EAUs' surrounding contexts it is prone to making many errors. In what follows we are simulating the effects that an overly context-sensitive classifier could have in a cross-document setting, by modifying our experimental setting, and study the effects on the different model types: In one setup – we call it randomized-context – we systematically distort the context of our testing instances by exchanging the context in a randomized manner; in the other setting – called no-context, we are deleting the context around the ADUs to be classified. Randomized-context simulates an open world debate where argumentative units may occur in different contexts, sometimes with discourse markers indicating an opposite class. In other words, in this setting we want to examine effects when porting a context-sensitive system to a multi-document setting. For example, as seen in Figure FIGREF42 , the context of an argumentative unit may change from “However” to “Moreover” – which can happen naturally in open debates. The results are displayed in Figure FIGREF43 . In the standard setting (Figure FIGREF43 ), the models that have access to the context besides the content ( INLINEFORM0 ) and the models that are only allowed to access the context ( INLINEFORM1 ), always perform better than the content-based models ( INLINEFORM2 ) (bars above zero). However, when we randomly flip contexts of the test instances (Figure FIGREF43 ), or suppress them entirely (Figure FIGREF43 ), the opposite picture emerges: the content-based models always outperform the other models. For some classes (support, INLINEFORM3 ) the difference can exceed 50 F1 percentage points. 
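A sketch of the two test-time perturbations just described, applied to placeholder (context, EAU) instances: randomized-context shuffles the contexts across instances, no-context deletes them entirely.

```python
import random

test_instances = [
    {"context": "However,",    "eau": "smoking marijuana damages the lungs"},
    {"context": "Moreover,",   "eau": "legalization would reduce black-market crime"},
    {"context": "Admittedly,", "eau": "taxation could fund prevention programs"},
]

def randomized_context(instances, seed=0):
    # Exchange contexts among the test instances in a randomized manner.
    contexts = [inst["context"] for inst in instances]
    random.Random(seed).shuffle(contexts)
    return [{"context": c, "eau": inst["eau"]} for c, inst in zip(contexts, instances)]

def no_context(instances):
    # Delete the surrounding context entirely.
    return [{"context": "", "eau": inst["eau"]} for inst in instances]

print(randomized_context(test_instances))
print(no_context(test_instances))
```

In the paper these perturbations affect only the test instances, so the trained models themselves are left unchanged.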
These two studies, where testing examples are varied regarding their context (randomized-context or no-context) simulates what can be expected if we apply our systems for relation class assignment to EAUs stemming from heterogeneous sources. While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model. We calculate the ANOVA classification F scores of the features with respect to our three task formulations INLINEFORM0 and INLINEFORM1 . The F percentiles of features extracted from the EAU surrounding text ( INLINEFORM2 ) and features extracted from the EAU span ( INLINEFORM3 ), are displayed in Figure FIGREF50 . It clearly stands out that features obtained from the EAU surrounding context ( INLINEFORM0 ) are assigned much higher scores compared to features stemming from the EAU span ( INLINEFORM1 ). This holds true for all three task formulations and provides further evidence that models – when given the option – put a strong focus on contextual clues while neglecting the information provided by the EAU span itself. Discussion While competitive systems for argumentative relation classification are considered to be robust, our experiments have shown that despite confidence-inspiring scores on unseen testing data, such systems can easily be fooled – they can deliver strong performance scores although the classifier does not have access to the content of the EAUs. In this respect, we have provided evidence that there is a danger in case models focus too much on rhetorical indicators, in detriment of the context. Thus, the following question arises: How can we prevent argumentation models from modeling arguments or argumentative units and their relations in overly naïve ways? A simple and intuitive way is to dissect EAUs from their surrounding document context. Models trained on data that is restricted to the EAUs' content will be forced to focus on the content of EAUs. We believe that this will enhance the robustness of such models and allows them to generalize to cross-document argument relation classification. The corpus of student essays makes such transformations straightforward: only the EAUs were annotated (e.g., “However, INLINEFORM0 A INLINEFORM1 ”). If annotations extend over the EAUs (e.g., only full sentences are annotated, “ INLINEFORM2 However, A INLINEFORM3 ”), such transformations could be performed automatically after a discourse parsing step. When inspecting the student essays corpus, we further observed that an EAU mining step should involve coreference resolution to better capture relations between EAUs that involve anaphors (e.g., “Exercising makes you feel better” and “It INLINEFORM4 increases endorphin levels”). Thus, in order to conduct real-world end-to-end argumentation relation mining for a given topic, we envision a system that addresses three steps: (i) mining of EAUs and (ii) replacement of pronouns in EAUs with referenced entities (e.g., INLINEFORM0 ). Finally (iii), given the cross product of mined EAUs we can apply a model of type INLINEFORM1 to construct a full-fledged argumentation graph, possibly spanning multiple documents. We have shown that in order to properly perform step (iii), we need stronger models that are able to better model EAU contents. 
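Step (iii) of the envisioned pipeline can be pictured as follows: apply a joint support/attack/neither model to the cross product of mined EAUs and keep only the predicted edges. The predict_relation stub and the example EAUs below are placeholders for a trained model and real mined units.

```python
from itertools import permutations
import networkx as nx

eaus = ["marijuana damages the lungs",
        "legalization reduces black-market crime",
        "prohibition fuels organized crime"]

def predict_relation(source, target):
    # Placeholder for a trained three-class (support / attack / neither) model.
    return "support" if "crime" in source and "crime" in target else "neither"

graph = nx.DiGraph()
graph.add_nodes_from(eaus)
for src, tgt in permutations(eaus, 2):        # cross product of mined EAUs
    label = predict_relation(src, tgt)
    if label != "neither":                    # keep only predicted support/attack edges
        graph.add_edge(src, tgt, relation=label)

print(list(graph.edges(data=True)))
```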
Hence, we encourage the argumentation community to test their systems on a decontextualized version of the student essays, including the proposed – and possibly further extended – testing setups, to challenge the semantic representation and reasoning capacities of argument analysis models. This will lead to more realistic performance estimates and increased robustness of systems when addressing desirable multi-document tasks. Conclusion We have shown that systems which put too much focus on discourse information may be easily fooled – an issue which has severe implications when systems are applied to cross-document argumentative relation classification tasks. The strong reliance on contextual clues is also problematic in single-document contexts, where systems can run a risk of assigning relation labels relying on contextual and rhetorical effects – instead of focusing on content. Hence, we propose that researchers test their argumentative relation classification systems on two alternative versions of the StudentEssay data that reflect different access levels. (i) EAU-span only, where systems only see the EAU spans and (ii) context-only, where systems can only see the EAU-surrounding context. These complementary settings will (i) challenge the semantic capacities of a system, and (ii) unveil the extent to which a system is focusing on the discourse context when making decisions. We will offer our testing environments to the research community through a platform that provides datasets and scripts and a table to trace the results of content-based systems. Acknowledgments This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant no. GRK 1994/1 and by the Leibniz ScienceCampus “Empirical Linguistics and Computational Language Modeling”, supported by the Leibniz Association under grant no. SAS-2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-Württemberg.
Unanswerable
fb2593de1f5cc632724e39d92e4dd82477f06ea1
fb2593de1f5cc632724e39d92e4dd82477f06ea1_0
Q: How do they demonstrate the robustness of their results? Text: Introduction In recent years we have witnessed a great surge in activity in the area of computational argument analysis (e.g. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 ), and the emergence of dedicated venues such as the ACL Argument Mining workshop series starting in 2014 BIBREF4 . Argumentative relation classification is a sub-task of argument analysis that aims to determine relations between argumentative units A and B, for example, A supports B; A attacks B. Consider the following argumentative units (1) and (2), given the topic (0) “Marijuana should be legalized”: This example is modeled in Figure FIGREF3 . It is clear that (1) has a negative stance towards the topic and (2) has a positive stance towards the topic. Moreover, we can say that (2) attacks (1). In discourse, such a relation is often made explicit through discourse markers: (1). However, (2); On the one hand (1), on the other (2); (1), although (2); Admittedly, (2); etc. In the absence of such markers we must determine this relation by assessing the semantics of the individual argumentative units, including (often implicit) world knowledge about how they are related to each other. In this work, we show that argumentative relation classifiers – when provided with textual context surrounding an argumentative unit's span – are very prone to neglect the actual textual content of the EAU span. Instead they heavily rely on contextual markers, such as conjunctions or adverbials, as a basis for prediction. We argue that a system's capacity of predicting the correct relation based on the argumentative units' content is important in many circumstances, e.g., when an argumentative debate crosses document boundaries. For example, the prohibition of marijuana debate extends across populations and countries – argumentative units for this debate can be recovered from thousands of documents scattered across the world wide web. As a consequence, argumentative relation classification systems should not be (immensely) dependent on contextual clues – in the discussed cross-document setting these clues may even be misleading for such a system, since source and target arguments can be embedded in different textual contexts (e.g., when (1) and (2) stem from different documents it is easy to imagine a textual context where (2) is not introduced by however but instead by an `inverse' form such as e.g. moreover). Related Work It is well-known that the rhetorical and argumentative structure of texts bear great similarities. For example, BIBREF5 , BIBREF6 , BIBREF0 observe that elementary discourse units (EDUs) in RST BIBREF7 share great similarity with elementary argumentative units (EAUs) in argumentation analysis. BIBREF8 experiment with a modified version of the Microtext corpus BIBREF9 , which is an extensively annotated albeit small corpus. Similar to us, they separate argumentative units from discursive contextual markers. While BIBREF8 conduct a human evaluation to investigate the separation of Logos and Pathos aspects of arguments, our work investigates how (de-)contextualization of argumentative units affects automatic argumentative relation classification models. Argumentative Relation Prediction: Models and Features In this section, we describe different formulations of the argumentative relation classification task and describe features used by our replicated model. In order to test our hypotheses, we propose to group all features into three distinct types. 
Models Now, we introduce a classification of three different prediction models used in the argumentative relation prediction literature. We will inspect all of them and show that all can suffer from severe issues when focusing (too much) on the context. The model INLINEFORM0 adopts a discourse parsing view on argumentative relation prediction and predicts one outgoing edge for an argumentative unit (one-outgoing edge). Model INLINEFORM1 assumes a connected graph with argumentative units and is tasked with predicting edge labels for unit tuples (labeling relations in a graph). Finally, a model INLINEFORM2 is given two (possibly) unrelated argumentative units and is tasked with predicting connections as well as edge labels (joint edge prediction and labeling). BIBREF13 divide the task into relation prediction INLINEFORM0 and relation class assignment INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM0 which the authors describe as argumentative relation identification ( INLINEFORM0 ) and stance detection ( INLINEFORM1 ). In their experiments, INLINEFORM2 , i.e., no distinction is made between features that access only the argument content (EAU span) or only the EAU's embedding context, and some features also consider both (e.g., discourse features). This model adopts a parsing view on argumentative relation classification: every unit is allowed to have only one type of outgoing relation (this follows trivially from the fact that INLINEFORM3 has only one input). Applying such a model to argumentative attack and support relations might impose unrealistic constraints on the resulting argumentation graph: A given premise might in fact attack or support several other premises. The approach may suffice for the case of student argumentative essays, where EAUs are well-framed in a discourse structure, but seems overly restrictive for many other scenarios. Another way of framing the task, is to learn a function DISPLAYFORM0 Here, an argumentative unit is allowed to be in a attack or support relation to multiple other EAUs. Yet, both INLINEFORM0 and INLINEFORM1 assume that inputs are already linked and only the class of the link is unknown. Thus, we might also model the task in a three-class classification setting to learn a more general function that performs relation prediction and classification jointly (see also, e.g., BIBREF10 ): DISPLAYFORM0 The model described by Eq. EQREF22 is the most general one: not only does it assume a graph view on argumentative units and their relations (as does Eq. EQREF20 ); in model formulation (Eq. EQREF22 ), an argumentative unit can have no or multiple support or attack relations. It naturally allows for cases where an argumentative unit INLINEFORM0 (supports INLINEFORM1 INLINEFORM2 attacks INLINEFORM3 INLINEFORM4 is-unrelated-to INLINEFORM5 ). Given a set of EAUs mined from different documents, this model enables us to construct a full-fledged argumentation graph. Feature implementation Our feature implementation follows the feature descriptions for Stance recognition and link identification in BIBREF13 . These features and variations of them have been used successfully in several successive works (cf. BIBREF1 , BIBREF16 , BIBREF15 ). For any model the features are indexed by INLINEFORM0 . We create a function INLINEFORM1 which maps from feature indices to feature types. In other words, INLINEFORM2 tells us, for any given feature, whether it is content-based ( INLINEFORM3 ), content-ignorant ( INLINEFORM4 ) or full access ( INLINEFORM5 ). 
The features for, e.g., the joint prediction model INLINEFORM6 of type INLINEFORM7 ( INLINEFORM8 ) can then simply be described as INLINEFORM9 . Recall that features computed on the basis of the EAU span are content-based ( INLINEFORM10 ), features from the EAU-surrounding text are content-ignorant ( INLINEFORM11 ) and features computed from both are denoted by full-access ( INLINEFORM12 ). Details on the extraction of features are provided below. These consist of boolean values indicating whether a certain word appears in the argumentative source or target EAU or both (and separately, their contexts). More precisely, for any classification instance we extract uni-grams from within the span of the EAU (if INLINEFORM0 ) or solely from the sentence-context surrounding the EAUs (if INLINEFORM1 ). Words which occur in both bags are only visible in the full-access setup INLINEFORM2 and are modeled as binary indicators. Such features consist of syntactic production rules extracted from constituency trees – they are modelled analogously to the lexical features as a bag of production rules. To make a clear division between features derived from the EAU embedding context and features derived from within the EAU span, we divide the constituency tree in two parts, as is illustrated in Figure FIGREF26 . If the EAU is embedded in a covering sentence, we cut the syntax tree at the corresponding edge ( in Figure FIGREF26 ). In this example, the content-ignorant ( INLINEFORM0 ) bag-of-word production rule representation includes the rules INLINEFORM1 and INLINEFORM2 . Analogously to the lexical features, the production rules are modeled as binary indicator features. These features describe shallow statistics such as the ratio of argumentative unit tokens compared to sentence tokens or the position of the argumentative unit in the paragraph. We set these features to zero for the content representation of the argumentative unit and replicate those features that allow us to treat the argumentative unit as a black-box. For example, in the content-based ( INLINEFORM0 ) system that has access only to the EAU, we can compute the #tokens in the EAU, but not the #tokens in EAU divided by #tokens in the sentence. The latter feature is only accessible in the full access system variants. Hence, in the content-based ( INLINEFORM1 ) system most of these statistics are set to zero since they cannot be computed by considering only the EAU span. For the content-based representation we retrieve only discourse relations that are confined within the span of the argumentative unit. In the very frequent case that discourse features cross the boundaries of embedding context and EAU span, we only take them into account for INLINEFORM0 . We use the element-wise sum of 300-dimensional pre-trained GloVe vectors BIBREF24 corresponding to the words within the EAU span ( INLINEFORM0 ) and the words of the EAU-surrounding context ( INLINEFORM1 ). Additionally, we compute the element-wise subtraction of the source EAU vector from the target EAU vector, with the aim of modelling directions in distributional space, similarly to BIBREF25 . Words with no corresponding pre-trained word vector and empty sequences (e.g., no preceding context available) are treated as a zero-vector. Tree-based sentiment annotations are sentiment scores assigned to nodes in constituency parse trees BIBREF26 . We represent these scores by a one-hot vector of dimension 5 (5 is very positive, 1 is very negative). 
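A small sketch of the one-hot sentiment encoding just described, together with the target-minus-source difference vector computed from it; the scores below are invented for illustration.

```python
import numpy as np

def one_hot_sentiment(score):
    # score in {1,...,5}; 5 = very positive, 1 = very negative
    v = np.zeros(5)
    v[score - 1] = 1.0
    return v

src_sent = one_hot_sentiment(2)        # sentiment node chosen for the source EAU
tgt_sent = one_hot_sentiment(4)        # sentiment node chosen for the target EAU
difference = tgt_sent - src_sent       # additional target-minus-source feature vector
print(src_sent, tgt_sent, difference)
```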
We determine the contextual ( INLINEFORM0 ) sentiment by looking at the highest possible node of the context which does not contain the EAU (ADVP in Figure FIGREF26 ). The sentiment for an EAU span ( INLINEFORM1 ) is assigned to the highest possible node covering the EAU span which does not contain the context sub-tree (S in Figure FIGREF26 ). The full-access ( INLINEFORM2 ) score is assigned to the lowest possible node which covers both the EAU span and its surrounding context (S' in Figure FIGREF26 ). Next to the sentiment scores for the selected tree nodes and analogously to the word embeddings, we also calculate the element-wise subtraction of the one-hot sentiment source vectors from the one-hot sentiment target vectors. This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors. Results Our first step towards our main experiments is to replicate the competitive argumentative relation classifier of BIBREF13 , BIBREF1 . Hence, for comparison purposes, we first formulate the task exactly as it was done in this prior work, using the model formulation in Eq. EQREF17 , which determines the type of outgoing edge from a source (i.e., tree-like view). The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features. The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1. Intriguingly, the content-ignorant models ( INLINEFORM6 ) always perform significantly better than the models which only have access to the EAUs' content ( INLINEFORM7 , INLINEFORM8 ). In the most general task formulation ( INLINEFORM9 ), we observe that INLINEFORM10 even significantly outperforms the model which has maximum access (seeing both EAU spans and surrounding contexts: INLINEFORM11 ). At first glance, the results of the purely EAU focused systems ( INLINEFORM0 ) are disappointing, since they fall far behind their competitors. On the other hand, their F1 scores are not devastatingly bad. The strong most-frequent-class baseline is significantly outperformed by the content-based ( INLINEFORM1 ) system, across all three prediction settings. In summary our findings are as follows: (i) models which see the EAU span (content-based, INLINEFORM0 ) are significantly outperformed by models that have no access to the span itself (content-ignorant, INLINEFORM1 ) across all settings; (ii) in two of three prediction settings ( INLINEFORM2 and INLINEFORM3 ), the model which only has access to the context even outperforms the model that has access to all information in the input. The fact that using features derived exclusively from the EAU embedding context ( INLINEFORM4 ) can lead to better results than using a full feature-system ( INLINEFORM5 ) suggests that some information from the EAU can even be harmful. Why this is the case, we cannot answer exactly. A plausible cause might be related to the smaller dimension of the feature space, which makes the SVM less likely to overfit. Still, this finding comes as a surprise and calls for further investigation in future work. 
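Both the production-rule features and the sentiment node selection above rely on cutting the constituency tree into a context part and an EAU part. The rough NLTK sketch below approximates that cut on a hand-written bracketing; the set difference over productions is a simplification of the procedure in the paper, and the tree is not parser output.

```python
from nltk import Tree

# Hand-written bracketing roughly mirroring "However, <EAU>".
sent = Tree.fromstring(
    "(ROOT (ADVP (RB However)) (, ,) "
    "(S (NP (NN Smoking)) (VP (VBZ damages) (NP (DT the) (NNS lungs)))))"
)
eau_subtree = sent[2]                                    # the S node covering the EAU span

eau_rules = {str(p) for p in eau_subtree.productions()}  # content-based production rules
context_rules = {str(p) for p in sent.productions()} - eau_rules  # rough context-side rules

print("content-based:", sorted(eau_rules))
print("content-ignorant:", sorted(context_rules))
```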
A system for argumentative relation classification can be applied in one of two settings: single-document or cross-document, as illustrated in Figure FIGREF42 : in the first case (top), a system is tasked to classify EAUs that appear linearly in one document – here contextual clues can often highlight the relationship between two units. This is the setting we have been considering up to now. However, in the second scenario (bottom), we have moved away from the closed single-document setting and ask the system to classify two EAUs extracted from different document contexts. This setting applies, for instance, when we are mining arguments from multiple sources. In both cases, however, a system that relies more on contextual clues than on the content expressed in the EAUs is problematic: in the single-document setting, such a system will rely on discourse indicators – whether or not they are justified by content – and can thus easily be fooled. In the cross-document setting, discourse-based indicators – being inherently defined with respect to their internal document context – do not have a defined rhetorical function with respect to EAUs in a separate document and thus a system that has learned to rely on such markers within a single-document setting can be seriously misled. We believe that the cross-document setting should be an important goal in argumentation analysis, since it generalizes better to many debates of interest, where EAUs can be found scattered across thousands of documents. For example, for the topic of legalizing marijuana, EAUs may be mined from millions of documents and thus their relations may naturally extend across document boundaries. If a system learns to over-proportionally attend to the EAUs' surrounding contexts it is prone to making many errors. In what follows we are simulating the effects that an overly context-sensitive classifier could have in a cross-document setting, by modifying our experimental setting, and study the effects on the different model types: In one setup – we call it randomized-context – we systematically distort the context of our testing instances by exchanging the context in a randomized manner; in the other setting – called no-context, we are deleting the context around the ADUs to be classified. Randomized-context simulates an open world debate where argumentative units may occur in different contexts, sometimes with discourse markers indicating an opposite class. In other words, in this setting we want to examine effects when porting a context-sensitive system to a multi-document setting. For example, as seen in Figure FIGREF42 , the context of an argumentative unit may change from “However” to “Moreover” – which can happen naturally in open debates. The results are displayed in Figure FIGREF43 . In the standard setting (Figure FIGREF43 ), the models that have access to the context besides the content ( INLINEFORM0 ) and the models that are only allowed to access the context ( INLINEFORM1 ), always perform better than the content-based models ( INLINEFORM2 ) (bars above zero). However, when we randomly flip contexts of the test instances (Figure FIGREF43 ), or suppress them entirely (Figure FIGREF43 ), the opposite picture emerges: the content-based models always outperform the other models. For some classes (support, INLINEFORM3 ) the difference can exceed 50 F1 percentage points. 
These two studies, where testing examples are varied regarding their context (randomized-context or no-context) simulates what can be expected if we apply our systems for relation class assignment to EAUs stemming from heterogeneous sources. While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model. We calculate the ANOVA classification F scores of the features with respect to our three task formulations INLINEFORM0 and INLINEFORM1 . The F percentiles of features extracted from the EAU surrounding text ( INLINEFORM2 ) and features extracted from the EAU span ( INLINEFORM3 ), are displayed in Figure FIGREF50 . It clearly stands out that features obtained from the EAU surrounding context ( INLINEFORM0 ) are assigned much higher scores compared to features stemming from the EAU span ( INLINEFORM1 ). This holds true for all three task formulations and provides further evidence that models – when given the option – put a strong focus on contextual clues while neglecting the information provided by the EAU span itself. Discussion While competitive systems for argumentative relation classification are considered to be robust, our experiments have shown that despite confidence-inspiring scores on unseen testing data, such systems can easily be fooled – they can deliver strong performance scores although the classifier does not have access to the content of the EAUs. In this respect, we have provided evidence that there is a danger in case models focus too much on rhetorical indicators, in detriment of the context. Thus, the following question arises: How can we prevent argumentation models from modeling arguments or argumentative units and their relations in overly naïve ways? A simple and intuitive way is to dissect EAUs from their surrounding document context. Models trained on data that is restricted to the EAUs' content will be forced to focus on the content of EAUs. We believe that this will enhance the robustness of such models and allows them to generalize to cross-document argument relation classification. The corpus of student essays makes such transformations straightforward: only the EAUs were annotated (e.g., “However, INLINEFORM0 A INLINEFORM1 ”). If annotations extend over the EAUs (e.g., only full sentences are annotated, “ INLINEFORM2 However, A INLINEFORM3 ”), such transformations could be performed automatically after a discourse parsing step. When inspecting the student essays corpus, we further observed that an EAU mining step should involve coreference resolution to better capture relations between EAUs that involve anaphors (e.g., “Exercising makes you feel better” and “It INLINEFORM4 increases endorphin levels”). Thus, in order to conduct real-world end-to-end argumentation relation mining for a given topic, we envision a system that addresses three steps: (i) mining of EAUs and (ii) replacement of pronouns in EAUs with referenced entities (e.g., INLINEFORM0 ). Finally (iii), given the cross product of mined EAUs we can apply a model of type INLINEFORM1 to construct a full-fledged argumentation graph, possibly spanning multiple documents. We have shown that in order to properly perform step (iii), we need stronger models that are able to better model EAU contents. 
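The ANOVA feature analysis reported above can be approximated with scikit-learn's f_classif; whether the authors used exactly this routine is an assumption on our part, and the data and the CI/CB column split below are invented.

```python
import numpy as np
from scipy.stats import percentileofscore
from sklearn.feature_selection import f_classif

rng = np.random.RandomState(2)
X = rng.rand(200, 50)                                    # 200 instances, 50 features
y = rng.choice(["support", "attack", "neither"], 200)
ci_cols, cb_cols = np.arange(0, 25), np.arange(25, 50)   # pretend CI / CB column split

F, _ = f_classif(X, y)                                   # ANOVA F score per feature
ci_pct = [percentileofscore(F, s) for s in F[ci_cols]]
cb_pct = [percentileofscore(F, s) for s in F[cb_cols]]
print("mean F percentile  CI:", round(np.mean(ci_pct), 1), " CB:", round(np.mean(cb_pct), 1))
```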
Hence, we encourage the argumentation community to test their systems on a decontextualized version of the student essays, including the proposed – and possibly further extended – testing setups, to challenge the semantic representation and reasoning capacities of argument analysis models. This will lead to more realistic performance estimates and increased robustness of systems when addressing desirable multi-document tasks. Conclusion We have shown that systems which put too much focus on discourse information may be easily fooled – an issue which has severe implications when systems are applied to cross-document argumentative relation classification tasks. The strong reliance on contextual clues is also problematic in single-document contexts, where systems can run a risk of assigning relation labels relying on contextual and rhetorical effects – instead of focusing on content. Hence, we propose that researchers test their argumentative relation classification systems on two alternative versions of the StudentEssay data that reflect different access levels. (i) EAU-span only, where systems only see the EAU spans and (ii) context-only, where systems can only see the EAU-surrounding context. These complementary settings will (i) challenge the semantic capacities of a system, and (ii) unveil the extent to which a system is focusing on the discourse context when making decisions. We will offer our testing environments to the research community through a platform that provides datasets and scripts and a table to trace the results of content-based systems. Acknowledgments This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant no. GRK 1994/1 and by the Leibniz ScienceCampus “Empirical Linguistics and Computational Language Modeling”, supported by the Leibniz Association under grant no. SAS-2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-Württemberg.
performances of a purely content-based model naturally stays stable
476d0b5579deb9199423bb843e584e606d606bc7
476d0b5579deb9199423bb843e584e606d606bc7_0
Q: What baseline and classification systems are used in experiments? Text: Introduction In recent years we have witnessed a great surge in activity in the area of computational argument analysis (e.g. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 ), and the emergence of dedicated venues such as the ACL Argument Mining workshop series starting in 2014 BIBREF4 . Argumentative relation classification is a sub-task of argument analysis that aims to determine relations between argumentative units A and B, for example, A supports B; A attacks B. Consider the following argumentative units (1) and (2), given the topic (0) “Marijuana should be legalized”: This example is modeled in Figure FIGREF3 . It is clear that (1) has a negative stance towards the topic and (2) has a positive stance towards the topic. Moreover, we can say that (2) attacks (1). In discourse, such a relation is often made explicit through discourse markers: (1). However, (2); On the one hand (1), on the other (2); (1), although (2); Admittedly, (2); etc. In the absence of such markers we must determine this relation by assessing the semantics of the individual argumentative units, including (often implicit) world knowledge about how they are related to each other. In this work, we show that argumentative relation classifiers – when provided with textual context surrounding an argumentative unit's span – are very prone to neglect the actual textual content of the EAU span. Instead they heavily rely on contextual markers, such as conjunctions or adverbials, as a basis for prediction. We argue that a system's capacity of predicting the correct relation based on the argumentative units' content is important in many circumstances, e.g., when an argumentative debate crosses document boundaries. For example, the prohibition of marijuana debate extends across populations and countries – argumentative units for this debate can be recovered from thousands of documents scattered across the world wide web. As a consequence, argumentative relation classification systems should not be (immensely) dependent on contextual clues – in the discussed cross-document setting these clues may even be misleading for such a system, since source and target arguments can be embedded in different textual contexts (e.g., when (1) and (2) stem from different documents it is easy to imagine a textual context where (2) is not introduced by however but instead by an `inverse' form such as e.g. moreover). Related Work It is well-known that the rhetorical and argumentative structure of texts bear great similarities. For example, BIBREF5 , BIBREF6 , BIBREF0 observe that elementary discourse units (EDUs) in RST BIBREF7 share great similarity with elementary argumentative units (EAUs) in argumentation analysis. BIBREF8 experiment with a modified version of the Microtext corpus BIBREF9 , which is an extensively annotated albeit small corpus. Similar to us, they separate argumentative units from discursive contextual markers. While BIBREF8 conduct a human evaluation to investigate the separation of Logos and Pathos aspects of arguments, our work investigates how (de-)contextualization of argumentative units affects automatic argumentative relation classification models. Argumentative Relation Prediction: Models and Features In this section, we describe different formulations of the argumentative relation classification task and describe features used by our replicated model. In order to test our hypotheses, we propose to group all features into three distinct types. 
Models Now, we introduce a classification of three different prediction models used in the argumentative relation prediction literature. We will inspect all of them and show that all can suffer from severe issues when focusing (too much) on the context. The model INLINEFORM0 adopts a discourse parsing view on argumentative relation prediction and predicts one outgoing edge for an argumentative unit (one-outgoing edge). Model INLINEFORM1 assumes a connected graph with argumentative units and is tasked with predicting edge labels for unit tuples (labeling relations in a graph). Finally, a model INLINEFORM2 is given two (possibly) unrelated argumentative units and is tasked with predicting connections as well as edge labels (joint edge prediction and labeling). BIBREF13 divide the task into relation prediction INLINEFORM0 and relation class assignment INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM0 which the authors describe as argumentative relation identification ( INLINEFORM0 ) and stance detection ( INLINEFORM1 ). In their experiments, INLINEFORM2 , i.e., no distinction is made between features that access only the argument content (EAU span) or only the EAU's embedding context, and some features also consider both (e.g., discourse features). This model adopts a parsing view on argumentative relation classification: every unit is allowed to have only one type of outgoing relation (this follows trivially from the fact that INLINEFORM3 has only one input). Applying such a model to argumentative attack and support relations might impose unrealistic constraints on the resulting argumentation graph: A given premise might in fact attack or support several other premises. The approach may suffice for the case of student argumentative essays, where EAUs are well-framed in a discourse structure, but seems overly restrictive for many other scenarios. Another way of framing the task, is to learn a function DISPLAYFORM0 Here, an argumentative unit is allowed to be in a attack or support relation to multiple other EAUs. Yet, both INLINEFORM0 and INLINEFORM1 assume that inputs are already linked and only the class of the link is unknown. Thus, we might also model the task in a three-class classification setting to learn a more general function that performs relation prediction and classification jointly (see also, e.g., BIBREF10 ): DISPLAYFORM0 The model described by Eq. EQREF22 is the most general one: not only does it assume a graph view on argumentative units and their relations (as does Eq. EQREF20 ); in model formulation (Eq. EQREF22 ), an argumentative unit can have no or multiple support or attack relations. It naturally allows for cases where an argumentative unit INLINEFORM0 (supports INLINEFORM1 INLINEFORM2 attacks INLINEFORM3 INLINEFORM4 is-unrelated-to INLINEFORM5 ). Given a set of EAUs mined from different documents, this model enables us to construct a full-fledged argumentation graph. Feature implementation Our feature implementation follows the feature descriptions for Stance recognition and link identification in BIBREF13 . These features and variations of them have been used successfully in several successive works (cf. BIBREF1 , BIBREF16 , BIBREF15 ). For any model the features are indexed by INLINEFORM0 . We create a function INLINEFORM1 which maps from feature indices to feature types. In other words, INLINEFORM2 tells us, for any given feature, whether it is content-based ( INLINEFORM3 ), content-ignorant ( INLINEFORM4 ) or full access ( INLINEFORM5 ). 
The features for, e.g., the joint prediction model INLINEFORM6 of type INLINEFORM7 ( INLINEFORM8 ) can then simply be described as INLINEFORM9 . Recall that features computed on the basis of the EAU span are content-based ( INLINEFORM10 ), features from the EAU-surrounding text are content-ignorant ( INLINEFORM11 ) and features computed from both are denoted by full-access ( INLINEFORM12 ). Details on the extraction of features are provided below. These consist of boolean values indicating whether a certain word appears in the argumentative source or target EAU or both (and separately, their contexts). More precisely, for any classification instance we extract uni-grams from within the span of the EAU (if INLINEFORM0 ) or solely from the sentence-context surrounding the EAUs (if INLINEFORM1 ). Words which occur in both bags are only visible in the full-access setup INLINEFORM2 and are modeled as binary indicators. Such features consist of syntactic production rules extracted from constituency trees – they are modelled analogously to the lexical features as a bag of production rules. To make a clear division between features derived from the EAU embedding context and features derived from within the EAU span, we divide the constituency tree in two parts, as is illustrated in Figure FIGREF26 . If the EAU is embedded in a covering sentence, we cut the syntax tree at the corresponding edge ( in Figure FIGREF26 ). In this example, the content-ignorant ( INLINEFORM0 ) bag-of-word production rule representation includes the rules INLINEFORM1 and INLINEFORM2 . Analogously to the lexical features, the production rules are modeled as binary indicator features. These features describe shallow statistics such as the ratio of argumentative unit tokens compared to sentence tokens or the position of the argumentative unit in the paragraph. We set these features to zero for the content representation of the argumentative unit and replicate those features that allow us to treat the argumentative unit as a black-box. For example, in the content-based ( INLINEFORM0 ) system that has access only to the EAU, we can compute the #tokens in the EAU, but not the #tokens in EAU divided by #tokens in the sentence. The latter feature is only accessible in the full access system variants. Hence, in the content-based ( INLINEFORM1 ) system most of these statistics are set to zero since they cannot be computed by considering only the EAU span. For the content-based representation we retrieve only discourse relations that are confined within the span of the argumentative unit. In the very frequent case that discourse features cross the boundaries of embedding context and EAU span, we only take them into account for INLINEFORM0 . We use the element-wise sum of 300-dimensional pre-trained GloVe vectors BIBREF24 corresponding to the words within the EAU span ( INLINEFORM0 ) and the words of the EAU-surrounding context ( INLINEFORM1 ). Additionally, we compute the element-wise subtraction of the source EAU vector from the target EAU vector, with the aim of modelling directions in distributional space, similarly to BIBREF25 . Words with no corresponding pre-trained word vector and empty sequences (e.g., no preceding context available) are treated as a zero-vector. Tree-based sentiment annotations are sentiment scores assigned to nodes in constituency parse trees BIBREF26 . We represent these scores by a one-hot vector of dimension 5 (5 is very positive, 1 is very negative). 
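A sketch of the embedding features described above: an element-wise sum over the words of a span plus a target-minus-source difference vector. The tiny embedding table below stands in for the 300-dimensional pre-trained GloVe vectors used in the paper.

```python
import numpy as np

DIM = 2   # stands in for the 300 dimensions of the pre-trained GloVe vectors
emb = {"marijuana": np.array([0.1, 0.3]), "damages": np.array([-0.2, 0.4]),
       "lungs": np.array([0.0, -0.1]), "helps": np.array([0.3, 0.2])}

def span_vector(tokens):
    # Element-wise sum over the span; unknown words are treated as zero vectors.
    vecs = [emb.get(t, np.zeros(DIM)) for t in tokens]
    return np.sum(vecs, axis=0) if vecs else np.zeros(DIM)

src = span_vector("marijuana damages lungs".split())
tgt = span_vector("marijuana helps patients".split())
features = np.concatenate([src, tgt, tgt - src])   # source, target, target-minus-source
print(features)
```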
We determine the contextual ( INLINEFORM0 ) sentiment by looking at the highest possible node of the context which does not contain the EAU (ADVP in Figure FIGREF26 ). The sentiment for an EAU span ( INLINEFORM1 ) is assigned to the highest possible node covering the EAU span which does not contain the context sub-tree (S in Figure FIGREF26 ). The full-access ( INLINEFORM2 ) score is assigned to the lowest possible node which covers both the EAU span and its surrounding context (S' in Figure FIGREF26 ). Next to the sentiment scores for the selected tree nodes and analogously to the word embeddings, we also calculate the element-wise subtraction of the one-hot sentiment source vectors from the one-hot sentiment target vectors. This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors. Results Our first step towards our main experiments is to replicate the competitive argumentative relation classifier of BIBREF13 , BIBREF1 . Hence, for comparison purposes, we first formulate the task exactly as it was done in this prior work, using the model formulation in Eq. EQREF17 , which determines the type of outgoing edge from a source (i.e., tree-like view). The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features. The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1. Intriguingly, the content-ignorant models ( INLINEFORM6 ) always perform significantly better than the models which only have access to the EAUs' content ( INLINEFORM7 , INLINEFORM8 ). In the most general task formulation ( INLINEFORM9 ), we observe that INLINEFORM10 even significantly outperforms the model which has maximum access (seeing both EAU spans and surrounding contexts: INLINEFORM11 ). At first glance, the results of the purely EAU focused systems ( INLINEFORM0 ) are disappointing, since they fall far behind their competitors. On the other hand, their F1 scores are not devastatingly bad. The strong most-frequent-class baseline is significantly outperformed by the content-based ( INLINEFORM1 ) system, across all three prediction settings. In summary our findings are as follows: (i) models which see the EAU span (content-based, INLINEFORM0 ) are significantly outperformed by models that have no access to the span itself (content-ignorant, INLINEFORM1 ) across all settings; (ii) in two of three prediction settings ( INLINEFORM2 and INLINEFORM3 ), the model which only has access to the context even outperforms the model that has access to all information in the input. The fact that using features derived exclusively from the EAU embedding context ( INLINEFORM4 ) can lead to better results than using a full feature-system ( INLINEFORM5 ) suggests that some information from the EAU can even be harmful. Why this is the case, we cannot answer exactly. A plausible cause might be related to the smaller dimension of the feature space, which makes the SVM less likely to overfit. Still, this finding comes as a surprise and calls for further investigation in future work. 
A system for argumentative relation classification can be applied in one of two settings: single-document or cross-document, as illustrated in Figure FIGREF42 : in the first case (top), a system is tasked to classify EAUs that appear linearly in one document – here contextual clues can often highlight the relationship between two units. This is the setting we have been considering up to now. However, in the second scenario (bottom), we have moved away from the closed single-document setting and ask the system to classify two EAUs extracted from different document contexts. This setting applies, for instance, when we are mining arguments from multiple sources. In both cases, however, a system that relies more on contextual clues than on the content expressed in the EAUs is problematic: in the single-document setting, such a system will rely on discourse indicators – whether or not they are justified by content – and can thus easily be fooled. In the cross-document setting, discourse-based indicators – being inherently defined with respect to their internal document context – do not have a defined rhetorical function with respect to EAUs in a separate document and thus a system that has learned to rely on such markers within a single-document setting can be seriously misled. We believe that the cross-document setting should be an important goal in argumentation analysis, since it generalizes better to many debates of interest, where EAUs can be found scattered across thousands of documents. For example, for the topic of legalizing marijuana, EAUs may be mined from millions of documents and thus their relations may naturally extend across document boundaries. If a system learns to over-proportionally attend to the EAUs' surrounding contexts it is prone to making many errors. In what follows we are simulating the effects that an overly context-sensitive classifier could have in a cross-document setting, by modifying our experimental setting, and study the effects on the different model types: In one setup – we call it randomized-context – we systematically distort the context of our testing instances by exchanging the context in a randomized manner; in the other setting – called no-context, we are deleting the context around the ADUs to be classified. Randomized-context simulates an open world debate where argumentative units may occur in different contexts, sometimes with discourse markers indicating an opposite class. In other words, in this setting we want to examine effects when porting a context-sensitive system to a multi-document setting. For example, as seen in Figure FIGREF42 , the context of an argumentative unit may change from “However” to “Moreover” – which can happen naturally in open debates. The results are displayed in Figure FIGREF43 . In the standard setting (Figure FIGREF43 ), the models that have access to the context besides the content ( INLINEFORM0 ) and the models that are only allowed to access the context ( INLINEFORM1 ), always perform better than the content-based models ( INLINEFORM2 ) (bars above zero). However, when we randomly flip contexts of the test instances (Figure FIGREF43 ), or suppress them entirely (Figure FIGREF43 ), the opposite picture emerges: the content-based models always outperform the other models. For some classes (support, INLINEFORM3 ) the difference can exceed 50 F1 percentage points. 
These two studies, where testing examples are varied regarding their context (randomized-context or no-context) simulates what can be expected if we apply our systems for relation class assignment to EAUs stemming from heterogeneous sources. While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model. We calculate the ANOVA classification F scores of the features with respect to our three task formulations INLINEFORM0 and INLINEFORM1 . The F percentiles of features extracted from the EAU surrounding text ( INLINEFORM2 ) and features extracted from the EAU span ( INLINEFORM3 ), are displayed in Figure FIGREF50 . It clearly stands out that features obtained from the EAU surrounding context ( INLINEFORM0 ) are assigned much higher scores compared to features stemming from the EAU span ( INLINEFORM1 ). This holds true for all three task formulations and provides further evidence that models – when given the option – put a strong focus on contextual clues while neglecting the information provided by the EAU span itself. Discussion While competitive systems for argumentative relation classification are considered to be robust, our experiments have shown that despite confidence-inspiring scores on unseen testing data, such systems can easily be fooled – they can deliver strong performance scores although the classifier does not have access to the content of the EAUs. In this respect, we have provided evidence that there is a danger in case models focus too much on rhetorical indicators, in detriment of the context. Thus, the following question arises: How can we prevent argumentation models from modeling arguments or argumentative units and their relations in overly naïve ways? A simple and intuitive way is to dissect EAUs from their surrounding document context. Models trained on data that is restricted to the EAUs' content will be forced to focus on the content of EAUs. We believe that this will enhance the robustness of such models and allows them to generalize to cross-document argument relation classification. The corpus of student essays makes such transformations straightforward: only the EAUs were annotated (e.g., “However, INLINEFORM0 A INLINEFORM1 ”). If annotations extend over the EAUs (e.g., only full sentences are annotated, “ INLINEFORM2 However, A INLINEFORM3 ”), such transformations could be performed automatically after a discourse parsing step. When inspecting the student essays corpus, we further observed that an EAU mining step should involve coreference resolution to better capture relations between EAUs that involve anaphors (e.g., “Exercising makes you feel better” and “It INLINEFORM4 increases endorphin levels”). Thus, in order to conduct real-world end-to-end argumentation relation mining for a given topic, we envision a system that addresses three steps: (i) mining of EAUs and (ii) replacement of pronouns in EAUs with referenced entities (e.g., INLINEFORM0 ). Finally (iii), given the cross product of mined EAUs we can apply a model of type INLINEFORM1 to construct a full-fledged argumentation graph, possibly spanning multiple documents. We have shown that in order to properly perform step (iii), we need stronger models that are able to better model EAU contents. 
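Step (ii) of the envisioned pipeline, pronoun replacement, might look as follows in its most naive form; a real system would rely on a coreference resolver, whereas here the antecedent is simply supplied by hand for illustration.

```python
# Toy stand-in for step (ii): replace a leading pronoun in an EAU with its antecedent.
PRONOUNS = {"it", "this", "they", "he", "she"}

def decontextualize(eau, antecedent):
    first, _, rest = eau.partition(" ")
    return f"{antecedent} {rest}" if first.lower() in PRONOUNS else eau

print(decontextualize("It increases endorphin levels", "Exercising"))
```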
Hence, we encourage the argumentation community to test their systems on a decontextualized version of the student essays, including the proposed – and possibly further extended – testing setups, to challenge the semantic representation and reasoning capacities of argument analysis models. This will lead to more realistic performance estimates and increased robustness of systems when addressing desirable multi-document tasks. Conclusion We have shown that systems which put too much focus on discourse information may be easily fooled – an issue which has severe implications when systems are applied to cross-document argumentative relation classification tasks. The strong reliance on contextual clues is also problematic in single-document contexts, where systems can run a risk of assigning relation labels relying on contextual and rhetorical effects – instead of focusing on content. Hence, we propose that researchers test their argumentative relation classification systems on two alternative versions of the StudentEssay data that reflect different access levels. (i) EAU-span only, where systems only see the EAU spans and (ii) context-only, where systems can only see the EAU-surrounding context. These complementary settings will (i) challenge the semantic capacities of a system, and (ii) unveil the extent to which a system is focusing on the discourse context when making decisions. We will offer our testing environments to the research community through a platform that provides datasets and scripts and a table to trace the results of content-based systems. Acknowledgments This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant no. GRK 1994/1 and by the Leibniz ScienceCampus “Empirical Linguistics and Computational Language Modeling”, supported by the Leibniz Association under grant no. SAS-2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-Württemberg.
BIBREF13, majority baseline
eddabb24bc6de6451bcdaa7940f708e925010912
eddabb24bc6de6451bcdaa7940f708e925010912_0
Q: How are the EAU text spans annotated? Text: Introduction In recent years we have witnessed a great surge in activity in the area of computational argument analysis (e.g. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 ), and the emergence of dedicated venues such as the ACL Argument Mining workshop series starting in 2014 BIBREF4 . Argumentative relation classification is a sub-task of argument analysis that aims to determine relations between argumentative units A and B, for example, A supports B; A attacks B. Consider the following argumentative units (1) and (2), given the topic (0) “Marijuana should be legalized”: This example is modeled in Figure FIGREF3 . It is clear that (1) has a negative stance towards the topic and (2) has a positive stance towards the topic. Moreover, we can say that (2) attacks (1). In discourse, such a relation is often made explicit through discourse markers: (1). However, (2); On the one hand (1), on the other (2); (1), although (2); Admittedly, (2); etc. In the absence of such markers we must determine this relation by assessing the semantics of the individual argumentative units, including (often implicit) world knowledge about how they are related to each other. In this work, we show that argumentative relation classifiers – when provided with textual context surrounding an argumentative unit's span – are very prone to neglect the actual textual content of the EAU span. Instead they heavily rely on contextual markers, such as conjunctions or adverbials, as a basis for prediction. We argue that a system's capacity of predicting the correct relation based on the argumentative units' content is important in many circumstances, e.g., when an argumentative debate crosses document boundaries. For example, the prohibition of marijuana debate extends across populations and countries – argumentative units for this debate can be recovered from thousands of documents scattered across the world wide web. As a consequence, argumentative relation classification systems should not be (immensely) dependent on contextual clues – in the discussed cross-document setting these clues may even be misleading for such a system, since source and target arguments can be embedded in different textual contexts (e.g., when (1) and (2) stem from different documents it is easy to imagine a textual context where (2) is not introduced by however but instead by an `inverse' form such as e.g. moreover). Related Work It is well-known that the rhetorical and argumentative structure of texts bear great similarities. For example, BIBREF5 , BIBREF6 , BIBREF0 observe that elementary discourse units (EDUs) in RST BIBREF7 share great similarity with elementary argumentative units (EAUs) in argumentation analysis. BIBREF8 experiment with a modified version of the Microtext corpus BIBREF9 , which is an extensively annotated albeit small corpus. Similar to us, they separate argumentative units from discursive contextual markers. While BIBREF8 conduct a human evaluation to investigate the separation of Logos and Pathos aspects of arguments, our work investigates how (de-)contextualization of argumentative units affects automatic argumentative relation classification models. Argumentative Relation Prediction: Models and Features In this section, we describe different formulations of the argumentative relation classification task and describe features used by our replicated model. In order to test our hypotheses, we propose to group all features into three distinct types. 
Models Now, we introduce a classification of three different prediction models used in the argumentative relation prediction literature. We will inspect all of them and show that all can suffer from severe issues when focusing (too much) on the context. The model INLINEFORM0 adopts a discourse parsing view on argumentative relation prediction and predicts one outgoing edge for an argumentative unit (one-outgoing edge). Model INLINEFORM1 assumes a connected graph with argumentative units and is tasked with predicting edge labels for unit tuples (labeling relations in a graph). Finally, a model INLINEFORM2 is given two (possibly) unrelated argumentative units and is tasked with predicting connections as well as edge labels (joint edge prediction and labeling). BIBREF13 divide the task into relation prediction INLINEFORM0 and relation class assignment INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM0 which the authors describe as argumentative relation identification ( INLINEFORM0 ) and stance detection ( INLINEFORM1 ). In their experiments, INLINEFORM2 , i.e., no distinction is made between features that access only the argument content (EAU span) or only the EAU's embedding context, and some features also consider both (e.g., discourse features). This model adopts a parsing view on argumentative relation classification: every unit is allowed to have only one type of outgoing relation (this follows trivially from the fact that INLINEFORM3 has only one input). Applying such a model to argumentative attack and support relations might impose unrealistic constraints on the resulting argumentation graph: A given premise might in fact attack or support several other premises. The approach may suffice for the case of student argumentative essays, where EAUs are well-framed in a discourse structure, but seems overly restrictive for many other scenarios. Another way of framing the task, is to learn a function DISPLAYFORM0 Here, an argumentative unit is allowed to be in a attack or support relation to multiple other EAUs. Yet, both INLINEFORM0 and INLINEFORM1 assume that inputs are already linked and only the class of the link is unknown. Thus, we might also model the task in a three-class classification setting to learn a more general function that performs relation prediction and classification jointly (see also, e.g., BIBREF10 ): DISPLAYFORM0 The model described by Eq. EQREF22 is the most general one: not only does it assume a graph view on argumentative units and their relations (as does Eq. EQREF20 ); in model formulation (Eq. EQREF22 ), an argumentative unit can have no or multiple support or attack relations. It naturally allows for cases where an argumentative unit INLINEFORM0 (supports INLINEFORM1 INLINEFORM2 attacks INLINEFORM3 INLINEFORM4 is-unrelated-to INLINEFORM5 ). Given a set of EAUs mined from different documents, this model enables us to construct a full-fledged argumentation graph. Feature implementation Our feature implementation follows the feature descriptions for Stance recognition and link identification in BIBREF13 . These features and variations of them have been used successfully in several successive works (cf. BIBREF1 , BIBREF16 , BIBREF15 ). For any model the features are indexed by INLINEFORM0 . We create a function INLINEFORM1 which maps from feature indices to feature types. In other words, INLINEFORM2 tells us, for any given feature, whether it is content-based ( INLINEFORM3 ), content-ignorant ( INLINEFORM4 ) or full access ( INLINEFORM5 ). 
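Before the individual features are described, the following sketch illustrates how the most general formulation described above could be used to assemble an argumentation graph from mined EAUs; the predictor interface and the label strings are assumptions made for the sake of the example.

```python
from itertools import permutations

def build_argument_graph(eaus, predict):
    """Construct argumentation-graph edges with the most general model type.

    `eaus` is a list of argumentative unit texts; `predict` is any callable
    mapping a (source, target) pair to one of {"support", "attack", "none"}.
    """
    edges = []
    for source, target in permutations(eaus, 2):   # all ordered pairs, no self-loops
        label = predict(source, target)
        if label != "none":                        # the joint model may predict no relation
            edges.append((source, target, label))
    return edges
```

Because every ordered pair is scored, a unit can end up with zero, one or several outgoing support or attack edges, which is exactly what distinguishes this formulation from the parsing-style view.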
The features for, e.g., the joint prediction model INLINEFORM6 of type INLINEFORM7 ( INLINEFORM8 ) can then simply be described as INLINEFORM9 . Recall that features computed on the basis of the EAU span are content-based ( INLINEFORM10 ), features from the EAU-surrounding text are content-ignorant ( INLINEFORM11 ) and features computed from both are denoted by full-access ( INLINEFORM12 ). Details on the extraction of features are provided below. These consist of boolean values indicating whether a certain word appears in the argumentative source or target EAU or both (and separately, their contexts). More precisely, for any classification instance we extract uni-grams from within the span of the EAU (if INLINEFORM0 ) or solely from the sentence-context surrounding the EAUs (if INLINEFORM1 ). Words which occur in both bags are only visible in the full-access setup INLINEFORM2 and are modeled as binary indicators. Such features consist of syntactic production rules extracted from constituency trees – they are modelled analogously to the lexical features as a bag of production rules. To make a clear division between features derived from the EAU embedding context and features derived from within the EAU span, we divide the constituency tree in two parts, as is illustrated in Figure FIGREF26 . If the EAU is embedded in a covering sentence, we cut the syntax tree at the corresponding edge ( in Figure FIGREF26 ). In this example, the content-ignorant ( INLINEFORM0 ) bag-of-word production rule representation includes the rules INLINEFORM1 and INLINEFORM2 . Analogously to the lexical features, the production rules are modeled as binary indicator features. These features describe shallow statistics such as the ratio of argumentative unit tokens compared to sentence tokens or the position of the argumentative unit in the paragraph. We set these features to zero for the content representation of the argumentative unit and replicate those features that allow us to treat the argumentative unit as a black-box. For example, in the content-based ( INLINEFORM0 ) system that has access only to the EAU, we can compute the #tokens in the EAU, but not the #tokens in EAU divided by #tokens in the sentence. The latter feature is only accessible in the full access system variants. Hence, in the content-based ( INLINEFORM1 ) system most of these statistics are set to zero since they cannot be computed by considering only the EAU span. For the content-based representation we retrieve only discourse relations that are confined within the span of the argumentative unit. In the very frequent case that discourse features cross the boundaries of embedding context and EAU span, we only take them into account for INLINEFORM0 . We use the element-wise sum of 300-dimensional pre-trained GloVe vectors BIBREF24 corresponding to the words within the EAU span ( INLINEFORM0 ) and the words of the EAU-surrounding context ( INLINEFORM1 ). Additionally, we compute the element-wise subtraction of the source EAU vector from the target EAU vector, with the aim of modelling directions in distributional space, similarly to BIBREF25 . Words with no corresponding pre-trained word vector and empty sequences (e.g., no preceding context available) are treated as a zero-vector. Tree-based sentiment annotations are sentiment scores assigned to nodes in constituency parse trees BIBREF26 . We represent these scores by a one-hot vector of dimension 5 (5 is very positive, 1 is very negative). 
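Going back to the lexical features described at the beginning of this passage, a small illustration of how the three access levels separate the unigram bags is given below; the prefixes and the handling of words that occur in both bags are simplifications of the description above, and the function name is our own.

```python
def unigram_features(span_tokens, context_tokens, access):
    """Binary unigram indicators under the three access levels.

    access: "content" uses only tokens inside the EAU span, "context" uses
    only tokens from the surrounding sentence context, and "full" sees both.
    Token lists are assumed to be pre-tokenized.
    """
    span_bag = {"span:" + tok.lower() for tok in span_tokens}
    context_bag = {"ctx:" + tok.lower() for tok in context_tokens}
    if access == "content":
        active = span_bag
    elif access == "context":
        active = context_bag
    elif access == "full":
        active = span_bag | context_bag
    else:
        raise ValueError("unknown access level: %s" % access)
    return {feature: 1 for feature in active}      # sparse binary representation
```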
We determine the contextual ( INLINEFORM0 ) sentiment by looking at the highest possible node of the context which does not contain the EAU (ADVP in Figure FIGREF26 ). The sentiment for an EAU span ( INLINEFORM1 ) is assigned to the highest possible node covering the EAU span which does not contain the context sub-tree (S in Figure FIGREF26 ). The full-access ( INLINEFORM2 ) score is assigned to the lowest possible node which covers both the EAU span and its surrounding context (S' in Figure FIGREF26 ). Next to the sentiment scores for the selected tree nodes and analogously to the word embeddings, we also calculate the element-wise subtraction of the one-hot sentiment source vectors from the one-hot sentiment target vectors. This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors. Results Our first step towards our main experiments is to replicate the competitive argumentative relation classifier of BIBREF13 , BIBREF1 . Hence, for comparison purposes, we first formulate the task exactly as it was done in this prior work, using the model formulation in Eq. EQREF17 , which determines the type of outgoing edge from a source (i.e., tree-like view). The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features. The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1. Intriguingly, the content-ignorant models ( INLINEFORM6 ) always perform significantly better than the models which only have access to the EAUs' content ( INLINEFORM7 , INLINEFORM8 ). In the most general task formulation ( INLINEFORM9 ), we observe that INLINEFORM10 even significantly outperforms the model which has maximum access (seeing both EAU spans and surrounding contexts: INLINEFORM11 ). At first glance, the results of the purely EAU focused systems ( INLINEFORM0 ) are disappointing, since they fall far behind their competitors. On the other hand, their F1 scores are not devastatingly bad. The strong most-frequent-class baseline is significantly outperformed by the content-based ( INLINEFORM1 ) system, across all three prediction settings. In summary our findings are as follows: (i) models which see the EAU span (content-based, INLINEFORM0 ) are significantly outperformed by models that have no access to the span itself (content-ignorant, INLINEFORM1 ) across all settings; (ii) in two of three prediction settings ( INLINEFORM2 and INLINEFORM3 ), the model which only has access to the context even outperforms the model that has access to all information in the input. The fact that using features derived exclusively from the EAU embedding context ( INLINEFORM4 ) can lead to better results than using a full feature-system ( INLINEFORM5 ) suggests that some information from the EAU can even be harmful. Why this is the case, we cannot answer exactly. A plausible cause might be related to the smaller dimension of the feature space, which makes the SVM less likely to overfit. Still, this finding comes as a surprise and calls for further investigation in future work. 
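A minimal sketch of the kind of evaluation reported here, comparing a linear SVM against the most-frequent-class baseline under macro F1, is given below; the choice of LinearSVC and the omission of significance testing are simplifying assumptions.

```python
from sklearn.dummy import DummyClassifier
from sklearn.metrics import f1_score
from sklearn.svm import LinearSVC

def evaluate_against_majority(X_train, y_train, X_test, y_test):
    """Macro-F1 of a linear SVM versus a most-frequent-class baseline."""
    baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
    svm = LinearSVC().fit(X_train, y_train)
    return {
        "majority_macro_F1": f1_score(y_test, baseline.predict(X_test), average="macro"),
        "svm_macro_F1": f1_score(y_test, svm.predict(X_test), average="macro"),
    }
```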
A system for argumentative relation classification can be applied in one of two settings: single-document or cross-document, as illustrated in Figure FIGREF42 : in the first case (top), a system is tasked to classify EAUs that appear linearly in one document – here contextual clues can often highlight the relationship between two units. This is the setting we have been considering up to now. However, in the second scenario (bottom), we have moved away from the closed single-document setting and ask the system to classify two EAUs extracted from different document contexts. This setting applies, for instance, when we are mining arguments from multiple sources. In both cases, however, a system that relies more on contextual clues than on the content expressed in the EAUs is problematic: in the single-document setting, such a system will rely on discourse indicators – whether or not they are justified by content – and can thus easily be fooled. In the cross-document setting, discourse-based indicators – being inherently defined with respect to their internal document context – do not have a defined rhetorical function with respect to EAUs in a separate document and thus a system that has learned to rely on such markers within a single-document setting can be seriously misled. We believe that the cross-document setting should be an important goal in argumentation analysis, since it generalizes better to many debates of interest, where EAUs can be found scattered across thousands of documents. For example, for the topic of legalizing marijuana, EAUs may be mined from millions of documents and thus their relations may naturally extend across document boundaries. If a system learns to over-proportionally attend to the EAUs' surrounding contexts it is prone to making many errors. In what follows we are simulating the effects that an overly context-sensitive classifier could have in a cross-document setting, by modifying our experimental setting, and study the effects on the different model types: In one setup – we call it randomized-context – we systematically distort the context of our testing instances by exchanging the context in a randomized manner; in the other setting – called no-context, we are deleting the context around the ADUs to be classified. Randomized-context simulates an open world debate where argumentative units may occur in different contexts, sometimes with discourse markers indicating an opposite class. In other words, in this setting we want to examine effects when porting a context-sensitive system to a multi-document setting. For example, as seen in Figure FIGREF42 , the context of an argumentative unit may change from “However” to “Moreover” – which can happen naturally in open debates. The results are displayed in Figure FIGREF43 . In the standard setting (Figure FIGREF43 ), the models that have access to the context besides the content ( INLINEFORM0 ) and the models that are only allowed to access the context ( INLINEFORM1 ), always perform better than the content-based models ( INLINEFORM2 ) (bars above zero). However, when we randomly flip contexts of the test instances (Figure FIGREF43 ), or suppress them entirely (Figure FIGREF43 ), the opposite picture emerges: the content-based models always outperform the other models. For some classes (support, INLINEFORM3 ) the difference can exceed 50 F1 percentage points. 
These two studies, where testing examples are varied regarding their context (randomized-context or no-context) simulates what can be expected if we apply our systems for relation class assignment to EAUs stemming from heterogeneous sources. While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model. We calculate the ANOVA classification F scores of the features with respect to our three task formulations INLINEFORM0 and INLINEFORM1 . The F percentiles of features extracted from the EAU surrounding text ( INLINEFORM2 ) and features extracted from the EAU span ( INLINEFORM3 ), are displayed in Figure FIGREF50 . It clearly stands out that features obtained from the EAU surrounding context ( INLINEFORM0 ) are assigned much higher scores compared to features stemming from the EAU span ( INLINEFORM1 ). This holds true for all three task formulations and provides further evidence that models – when given the option – put a strong focus on contextual clues while neglecting the information provided by the EAU span itself. Discussion While competitive systems for argumentative relation classification are considered to be robust, our experiments have shown that despite confidence-inspiring scores on unseen testing data, such systems can easily be fooled – they can deliver strong performance scores although the classifier does not have access to the content of the EAUs. In this respect, we have provided evidence that there is a danger in case models focus too much on rhetorical indicators, in detriment of the context. Thus, the following question arises: How can we prevent argumentation models from modeling arguments or argumentative units and their relations in overly naïve ways? A simple and intuitive way is to dissect EAUs from their surrounding document context. Models trained on data that is restricted to the EAUs' content will be forced to focus on the content of EAUs. We believe that this will enhance the robustness of such models and allows them to generalize to cross-document argument relation classification. The corpus of student essays makes such transformations straightforward: only the EAUs were annotated (e.g., “However, INLINEFORM0 A INLINEFORM1 ”). If annotations extend over the EAUs (e.g., only full sentences are annotated, “ INLINEFORM2 However, A INLINEFORM3 ”), such transformations could be performed automatically after a discourse parsing step. When inspecting the student essays corpus, we further observed that an EAU mining step should involve coreference resolution to better capture relations between EAUs that involve anaphors (e.g., “Exercising makes you feel better” and “It INLINEFORM4 increases endorphin levels”). Thus, in order to conduct real-world end-to-end argumentation relation mining for a given topic, we envision a system that addresses three steps: (i) mining of EAUs and (ii) replacement of pronouns in EAUs with referenced entities (e.g., INLINEFORM0 ). Finally (iii), given the cross product of mined EAUs we can apply a model of type INLINEFORM1 to construct a full-fledged argumentation graph, possibly spanning multiple documents. We have shown that in order to properly perform step (iii), we need stronger models that are able to better model EAU contents. 
Hence, we encourage the argumentation community to test their systems on a decontextualized version of the student essays, including the proposed – and possibly further extended – testing setups, to challenge the semantic representation and reasoning capacities of argument analysis models. This will lead to more realistic performance estimates and increased robustness of systems when addressing desirable multi-document tasks. Conclusion We have shown that systems which put too much focus on discourse information may be easily fooled – an issue which has severe implications when systems are applied to cross-document argumentative relation classification tasks. The strong reliance on contextual clues is also problematic in single-document contexts, where systems can run a risk of assigning relation labels relying on contextual and rhetorical effects – instead of focusing on content. Hence, we propose that researchers test their argumentative relation classification systems on two alternative versions of the StudentEssay data that reflect different access levels. (i) EAU-span only, where systems only see the EAU spans and (ii) context-only, where systems can only see the EAU-surrounding context. These complementary settings will (i) challenge the semantic capacities of a system, and (ii) unveil the extent to which a system is focusing on the discourse context when making decisions. We will offer our testing environments to the research community through a platform that provides datasets and scripts and a table to trace the results of content-based systems. Acknowledgments This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant no. GRK 1994/1 and by the Leibniz ScienceCampus “Empirical Linguistics and Computational Language Modeling”, supported by the Leibniz Association under grant no. SAS-2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-Württemberg.
Answer with content missing: (Data and pre-processing section) The data is suited for our experiments because the annotators were explicitly asked to provide annotations on a clausal level.
f0946fb9df9839977f4d16c43476e4c2724ff772
f0946fb9df9839977f4d16c43476e4c2724ff772_0
Q: How are elementary argumentative units defined? Text: Introduction In recent years we have witnessed a great surge in activity in the area of computational argument analysis (e.g. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 ), and the emergence of dedicated venues such as the ACL Argument Mining workshop series starting in 2014 BIBREF4 . Argumentative relation classification is a sub-task of argument analysis that aims to determine relations between argumentative units A and B, for example, A supports B; A attacks B. Consider the following argumentative units (1) and (2), given the topic (0) “Marijuana should be legalized”: This example is modeled in Figure FIGREF3 . It is clear that (1) has a negative stance towards the topic and (2) has a positive stance towards the topic. Moreover, we can say that (2) attacks (1). In discourse, such a relation is often made explicit through discourse markers: (1). However, (2); On the one hand (1), on the other (2); (1), although (2); Admittedly, (2); etc. In the absence of such markers we must determine this relation by assessing the semantics of the individual argumentative units, including (often implicit) world knowledge about how they are related to each other. In this work, we show that argumentative relation classifiers – when provided with textual context surrounding an argumentative unit's span – are very prone to neglect the actual textual content of the EAU span. Instead they heavily rely on contextual markers, such as conjunctions or adverbials, as a basis for prediction. We argue that a system's capacity of predicting the correct relation based on the argumentative units' content is important in many circumstances, e.g., when an argumentative debate crosses document boundaries. For example, the prohibition of marijuana debate extends across populations and countries – argumentative units for this debate can be recovered from thousands of documents scattered across the world wide web. As a consequence, argumentative relation classification systems should not be (immensely) dependent on contextual clues – in the discussed cross-document setting these clues may even be misleading for such a system, since source and target arguments can be embedded in different textual contexts (e.g., when (1) and (2) stem from different documents it is easy to imagine a textual context where (2) is not introduced by however but instead by an `inverse' form such as e.g. moreover). Related Work It is well-known that the rhetorical and argumentative structure of texts bear great similarities. For example, BIBREF5 , BIBREF6 , BIBREF0 observe that elementary discourse units (EDUs) in RST BIBREF7 share great similarity with elementary argumentative units (EAUs) in argumentation analysis. BIBREF8 experiment with a modified version of the Microtext corpus BIBREF9 , which is an extensively annotated albeit small corpus. Similar to us, they separate argumentative units from discursive contextual markers. While BIBREF8 conduct a human evaluation to investigate the separation of Logos and Pathos aspects of arguments, our work investigates how (de-)contextualization of argumentative units affects automatic argumentative relation classification models. Argumentative Relation Prediction: Models and Features In this section, we describe different formulations of the argumentative relation classification task and describe features used by our replicated model. In order to test our hypotheses, we propose to group all features into three distinct types. 
Models Now, we introduce a classification of three different prediction models used in the argumentative relation prediction literature. We will inspect all of them and show that all can suffer from severe issues when focusing (too much) on the context. The model INLINEFORM0 adopts a discourse parsing view on argumentative relation prediction and predicts one outgoing edge for an argumentative unit (one-outgoing edge). Model INLINEFORM1 assumes a connected graph with argumentative units and is tasked with predicting edge labels for unit tuples (labeling relations in a graph). Finally, a model INLINEFORM2 is given two (possibly) unrelated argumentative units and is tasked with predicting connections as well as edge labels (joint edge prediction and labeling). BIBREF13 divide the task into relation prediction INLINEFORM0 and relation class assignment INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM0 which the authors describe as argumentative relation identification ( INLINEFORM0 ) and stance detection ( INLINEFORM1 ). In their experiments, INLINEFORM2 , i.e., no distinction is made between features that access only the argument content (EAU span) or only the EAU's embedding context, and some features also consider both (e.g., discourse features). This model adopts a parsing view on argumentative relation classification: every unit is allowed to have only one type of outgoing relation (this follows trivially from the fact that INLINEFORM3 has only one input). Applying such a model to argumentative attack and support relations might impose unrealistic constraints on the resulting argumentation graph: A given premise might in fact attack or support several other premises. The approach may suffice for the case of student argumentative essays, where EAUs are well-framed in a discourse structure, but seems overly restrictive for many other scenarios. Another way of framing the task, is to learn a function DISPLAYFORM0 Here, an argumentative unit is allowed to be in a attack or support relation to multiple other EAUs. Yet, both INLINEFORM0 and INLINEFORM1 assume that inputs are already linked and only the class of the link is unknown. Thus, we might also model the task in a three-class classification setting to learn a more general function that performs relation prediction and classification jointly (see also, e.g., BIBREF10 ): DISPLAYFORM0 The model described by Eq. EQREF22 is the most general one: not only does it assume a graph view on argumentative units and their relations (as does Eq. EQREF20 ); in model formulation (Eq. EQREF22 ), an argumentative unit can have no or multiple support or attack relations. It naturally allows for cases where an argumentative unit INLINEFORM0 (supports INLINEFORM1 INLINEFORM2 attacks INLINEFORM3 INLINEFORM4 is-unrelated-to INLINEFORM5 ). Given a set of EAUs mined from different documents, this model enables us to construct a full-fledged argumentation graph. Feature implementation Our feature implementation follows the feature descriptions for Stance recognition and link identification in BIBREF13 . These features and variations of them have been used successfully in several successive works (cf. BIBREF1 , BIBREF16 , BIBREF15 ). For any model the features are indexed by INLINEFORM0 . We create a function INLINEFORM1 which maps from feature indices to feature types. In other words, INLINEFORM2 tells us, for any given feature, whether it is content-based ( INLINEFORM3 ), content-ignorant ( INLINEFORM4 ) or full access ( INLINEFORM5 ). 
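For readers who prefer to see the three formulations side by side, a schematic rendering as function signatures is given below; the type aliases are purely illustrative and not part of the original work.

```python
from typing import Callable, Literal

EAU = str  # an elementary argumentative unit, represented here by its text span

# One-outgoing-edge (parsing) view: the unit alone determines its relation type.
OneOutgoingEdge = Callable[[EAU], Literal["support", "attack"]]

# Relation labeling: source and target are assumed to be linked already.
RelationLabeling = Callable[[EAU, EAU], Literal["support", "attack"]]

# Joint edge prediction and labeling over arbitrary (source, target) pairs.
JointPrediction = Callable[[EAU, EAU], Literal["support", "attack", "none"]]
```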
The features for, e.g., the joint prediction model INLINEFORM6 of type INLINEFORM7 ( INLINEFORM8 ) can then simply be described as INLINEFORM9 . Recall that features computed on the basis of the EAU span are content-based ( INLINEFORM10 ), features from the EAU-surrounding text are content-ignorant ( INLINEFORM11 ) and features computed from both are denoted by full-access ( INLINEFORM12 ). Details on the extraction of features are provided below. These consist of boolean values indicating whether a certain word appears in the argumentative source or target EAU or both (and separately, their contexts). More precisely, for any classification instance we extract uni-grams from within the span of the EAU (if INLINEFORM0 ) or solely from the sentence-context surrounding the EAUs (if INLINEFORM1 ). Words which occur in both bags are only visible in the full-access setup INLINEFORM2 and are modeled as binary indicators. Such features consist of syntactic production rules extracted from constituency trees – they are modelled analogously to the lexical features as a bag of production rules. To make a clear division between features derived from the EAU embedding context and features derived from within the EAU span, we divide the constituency tree in two parts, as is illustrated in Figure FIGREF26 . If the EAU is embedded in a covering sentence, we cut the syntax tree at the corresponding edge ( in Figure FIGREF26 ). In this example, the content-ignorant ( INLINEFORM0 ) bag-of-word production rule representation includes the rules INLINEFORM1 and INLINEFORM2 . Analogously to the lexical features, the production rules are modeled as binary indicator features. These features describe shallow statistics such as the ratio of argumentative unit tokens compared to sentence tokens or the position of the argumentative unit in the paragraph. We set these features to zero for the content representation of the argumentative unit and replicate those features that allow us to treat the argumentative unit as a black-box. For example, in the content-based ( INLINEFORM0 ) system that has access only to the EAU, we can compute the #tokens in the EAU, but not the #tokens in EAU divided by #tokens in the sentence. The latter feature is only accessible in the full access system variants. Hence, in the content-based ( INLINEFORM1 ) system most of these statistics are set to zero since they cannot be computed by considering only the EAU span. For the content-based representation we retrieve only discourse relations that are confined within the span of the argumentative unit. In the very frequent case that discourse features cross the boundaries of embedding context and EAU span, we only take them into account for INLINEFORM0 . We use the element-wise sum of 300-dimensional pre-trained GloVe vectors BIBREF24 corresponding to the words within the EAU span ( INLINEFORM0 ) and the words of the EAU-surrounding context ( INLINEFORM1 ). Additionally, we compute the element-wise subtraction of the source EAU vector from the target EAU vector, with the aim of modelling directions in distributional space, similarly to BIBREF25 . Words with no corresponding pre-trained word vector and empty sequences (e.g., no preceding context available) are treated as a zero-vector. Tree-based sentiment annotations are sentiment scores assigned to nodes in constituency parse trees BIBREF26 . We represent these scores by a one-hot vector of dimension 5 (5 is very positive, 1 is very negative). 
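The embedding features described a few sentences above can be sketched as follows; the GloVe lookup is assumed to be a plain dictionary from lowercased words to vectors, and the concatenation layout is our own choice.

```python
import numpy as np

def embedding_features(source_tokens, target_tokens, glove, dim=300):
    """Element-wise sums of GloVe vectors per EAU plus a target-minus-source difference."""
    def sum_vectors(tokens):
        vec = np.zeros(dim)
        for tok in tokens:
            vec += glove.get(tok.lower(), np.zeros(dim))   # OOV words fall back to zero
        return vec

    source_vec = sum_vectors(source_tokens)
    target_vec = sum_vectors(target_tokens)
    difference = target_vec - source_vec        # models a direction in embedding space
    return np.concatenate([source_vec, target_vec, difference])
```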
We determine the contextual ( INLINEFORM0 ) sentiment by looking at the highest possible node of the context which does not contain the EAU (ADVP in Figure FIGREF26 ). The sentiment for an EAU span ( INLINEFORM1 ) is assigned to the highest possible node covering the EAU span which does not contain the context sub-tree (S in Figure FIGREF26 ). The full-access ( INLINEFORM2 ) score is assigned to the lowest possible node which covers both the EAU span and its surrounding context (S' in Figure FIGREF26 ). Next to the sentiment scores for the selected tree nodes and analogously to the word embeddings, we also calculate the element-wise subtraction of the one-hot sentiment source vectors from the one-hot sentiment target vectors. This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors. Results Our first step towards our main experiments is to replicate the competitive argumentative relation classifier of BIBREF13 , BIBREF1 . Hence, for comparison purposes, we first formulate the task exactly as it was done in this prior work, using the model formulation in Eq. EQREF17 , which determines the type of outgoing edge from a source (i.e., tree-like view). The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features. The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1. Intriguingly, the content-ignorant models ( INLINEFORM6 ) always perform significantly better than the models which only have access to the EAUs' content ( INLINEFORM7 , INLINEFORM8 ). In the most general task formulation ( INLINEFORM9 ), we observe that INLINEFORM10 even significantly outperforms the model which has maximum access (seeing both EAU spans and surrounding contexts: INLINEFORM11 ). At first glance, the results of the purely EAU focused systems ( INLINEFORM0 ) are disappointing, since they fall far behind their competitors. On the other hand, their F1 scores are not devastatingly bad. The strong most-frequent-class baseline is significantly outperformed by the content-based ( INLINEFORM1 ) system, across all three prediction settings. In summary our findings are as follows: (i) models which see the EAU span (content-based, INLINEFORM0 ) are significantly outperformed by models that have no access to the span itself (content-ignorant, INLINEFORM1 ) across all settings; (ii) in two of three prediction settings ( INLINEFORM2 and INLINEFORM3 ), the model which only has access to the context even outperforms the model that has access to all information in the input. The fact that using features derived exclusively from the EAU embedding context ( INLINEFORM4 ) can lead to better results than using a full feature-system ( INLINEFORM5 ) suggests that some information from the EAU can even be harmful. Why this is the case, we cannot answer exactly. A plausible cause might be related to the smaller dimension of the feature space, which makes the SVM less likely to overfit. Still, this finding comes as a surprise and calls for further investigation in future work. 
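Looking back at the sentiment part of the feature set described at the beginning of this passage, a short sketch is given below; node scores are assumed to come from a tree-based sentiment model on the 1-5 scale, and the feature layout is illustrative.

```python
import numpy as np

def sentiment_features(source_score, target_score, num_classes=5):
    """One-hot sentiment vectors (1 = very negative ... 5 = very positive)
    for the selected source and target tree nodes, plus their difference."""
    def one_hot(score):
        vec = np.zeros(num_classes)
        vec[score - 1] = 1.0
        return vec

    source = one_hot(source_score)
    target = one_hot(target_score)
    return np.concatenate([source, target, target - source])
```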
A system for argumentative relation classification can be applied in one of two settings: single-document or cross-document, as illustrated in Figure FIGREF42 : in the first case (top), a system is tasked to classify EAUs that appear linearly in one document – here contextual clues can often highlight the relationship between two units. This is the setting we have been considering up to now. However, in the second scenario (bottom), we have moved away from the closed single-document setting and ask the system to classify two EAUs extracted from different document contexts. This setting applies, for instance, when we are mining arguments from multiple sources. In both cases, however, a system that relies more on contextual clues than on the content expressed in the EAUs is problematic: in the single-document setting, such a system will rely on discourse indicators – whether or not they are justified by content – and can thus easily be fooled. In the cross-document setting, discourse-based indicators – being inherently defined with respect to their internal document context – do not have a defined rhetorical function with respect to EAUs in a separate document and thus a system that has learned to rely on such markers within a single-document setting can be seriously misled. We believe that the cross-document setting should be an important goal in argumentation analysis, since it generalizes better to many debates of interest, where EAUs can be found scattered across thousands of documents. For example, for the topic of legalizing marijuana, EAUs may be mined from millions of documents and thus their relations may naturally extend across document boundaries. If a system learns to over-proportionally attend to the EAUs' surrounding contexts it is prone to making many errors. In what follows we are simulating the effects that an overly context-sensitive classifier could have in a cross-document setting, by modifying our experimental setting, and study the effects on the different model types: In one setup – we call it randomized-context – we systematically distort the context of our testing instances by exchanging the context in a randomized manner; in the other setting – called no-context, we are deleting the context around the ADUs to be classified. Randomized-context simulates an open world debate where argumentative units may occur in different contexts, sometimes with discourse markers indicating an opposite class. In other words, in this setting we want to examine effects when porting a context-sensitive system to a multi-document setting. For example, as seen in Figure FIGREF42 , the context of an argumentative unit may change from “However” to “Moreover” – which can happen naturally in open debates. The results are displayed in Figure FIGREF43 . In the standard setting (Figure FIGREF43 ), the models that have access to the context besides the content ( INLINEFORM0 ) and the models that are only allowed to access the context ( INLINEFORM1 ), always perform better than the content-based models ( INLINEFORM2 ) (bars above zero). However, when we randomly flip contexts of the test instances (Figure FIGREF43 ), or suppress them entirely (Figure FIGREF43 ), the opposite picture emerges: the content-based models always outperform the other models. For some classes (support, INLINEFORM3 ) the difference can exceed 50 F1 percentage points. 
These two studies, where testing examples are varied regarding their context (randomized-context or no-context) simulates what can be expected if we apply our systems for relation class assignment to EAUs stemming from heterogeneous sources. While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model. We calculate the ANOVA classification F scores of the features with respect to our three task formulations INLINEFORM0 and INLINEFORM1 . The F percentiles of features extracted from the EAU surrounding text ( INLINEFORM2 ) and features extracted from the EAU span ( INLINEFORM3 ), are displayed in Figure FIGREF50 . It clearly stands out that features obtained from the EAU surrounding context ( INLINEFORM0 ) are assigned much higher scores compared to features stemming from the EAU span ( INLINEFORM1 ). This holds true for all three task formulations and provides further evidence that models – when given the option – put a strong focus on contextual clues while neglecting the information provided by the EAU span itself. Discussion While competitive systems for argumentative relation classification are considered to be robust, our experiments have shown that despite confidence-inspiring scores on unseen testing data, such systems can easily be fooled – they can deliver strong performance scores although the classifier does not have access to the content of the EAUs. In this respect, we have provided evidence that there is a danger in case models focus too much on rhetorical indicators, in detriment of the context. Thus, the following question arises: How can we prevent argumentation models from modeling arguments or argumentative units and their relations in overly naïve ways? A simple and intuitive way is to dissect EAUs from their surrounding document context. Models trained on data that is restricted to the EAUs' content will be forced to focus on the content of EAUs. We believe that this will enhance the robustness of such models and allows them to generalize to cross-document argument relation classification. The corpus of student essays makes such transformations straightforward: only the EAUs were annotated (e.g., “However, INLINEFORM0 A INLINEFORM1 ”). If annotations extend over the EAUs (e.g., only full sentences are annotated, “ INLINEFORM2 However, A INLINEFORM3 ”), such transformations could be performed automatically after a discourse parsing step. When inspecting the student essays corpus, we further observed that an EAU mining step should involve coreference resolution to better capture relations between EAUs that involve anaphors (e.g., “Exercising makes you feel better” and “It INLINEFORM4 increases endorphin levels”). Thus, in order to conduct real-world end-to-end argumentation relation mining for a given topic, we envision a system that addresses three steps: (i) mining of EAUs and (ii) replacement of pronouns in EAUs with referenced entities (e.g., INLINEFORM0 ). Finally (iii), given the cross product of mined EAUs we can apply a model of type INLINEFORM1 to construct a full-fledged argumentation graph, possibly spanning multiple documents. We have shown that in order to properly perform step (iii), we need stronger models that are able to better model EAU contents. 
Hence, we encourage the argumentation community to test their systems on a decontextualized version of the student essays, including the proposed – and possibly further extended – testing setups, to challenge the semantic representation and reasoning capacities of argument analysis models. This will lead to more realistic performance estimates and increased robustness of systems when addressing desirable multi-document tasks. Conclusion We have shown that systems which put too much focus on discourse information may be easily fooled – an issue which has severe implications when systems are applied to cross-document argumentative relation classification tasks. The strong reliance on contextual clues is also problematic in single-document contexts, where systems can run a risk of assigning relation labels relying on contextual and rhetorical effects – instead of focusing on content. Hence, we propose that researchers test their argumentative relation classification systems on two alternative versions of the StudentEssay data that reflect different access levels. (i) EAU-span only, where systems only see the EAU spans and (ii) context-only, where systems can only see the EAU-surrounding context. These complementary settings will (i) challenge the semantic capacities of a system, and (ii) unveil the extent to which a system is focusing on the discourse context when making decisions. We will offer our testing environments to the research community through a platform that provides datasets and scripts and a table to trace the results of content-based systems. Acknowledgments This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant no. GRK 1994/1 and by the Leibniz ScienceCampus “Empirical Linguistics and Computational Language Modeling”, supported by the Leibniz Association under grant no. SAS-2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-Württemberg.
Unanswerable
e51d0c2c336f255e342b5f6c3cf2a13231789fed
e51d0c2c336f255e342b5f6c3cf2a13231789fed_0
Q: Which Twitter corpus was used to train the word vectors? Text: Introduction Word semantic similarity task is an important part of contemporary NLP. It can be applied in many areas, like word sense disambiguation, information retrieval, information extraction and others. It has long history of improvements, starting with simple models, like bag-of-words (often weighted by TF-IDF score), continuing with more complex ones, like LSA BIBREF0 , which attempts to find “latent” meanings of words and phrases, and even more abstract models, like NNLM BIBREF1 . Latest results are based on neural network experience, but are far more simple: various versions of Word2Vec, Skip-gram and CBOW models BIBREF2 , which currently show the State-of-the-Art results and have proven success with morphologically complex languages like Russian BIBREF3 , BIBREF4 . These are corpus-based approaches, where one computes or trains the model from a large corpus. They usually consider some word context, like in bag-of-words, where model is simple count of how often can some word be seen in context of a word being described. This model anyhow does not use semantic information. A step in semantic direction was made by LSA, which requires SVD transformation of co-occurrence matrix and produces vectors with latent, unknown structure. However, this method is rather computationally expensive, and can rarely be applied to large corpora. Distributed language model was proposed, where every word is initially assigned a random fixed-size vector. During training semantically close vectors (or close by means of context) become closer to each other; as matter of closeness the cosine similarity is usually chosen. This trick enables usage of neural networks and other machine learning techniques, which easily deal with fixed-size real vectors, instead of large and sparse co-occurrence vectors. It is worth mentioning non-corpus based techniques to estimate word semantic similarity. They usually make use of knowledge databases, like WordNet, Wikipedia, Wiktionary and others BIBREF5 , BIBREF6 . It was shown that Wikipedia data can be used in graph-based methods BIBREF7 , and also in corpus-based ones. In this paper we are not focusing on non-corpus based techniques. In this paper we concentrate on usage of Russian Twitter stream as training corpus for Word2Vec model in semantic similarity task, and show results comparable with current (trained on a single corpus). This research is part of molva.spb.ru project, which is a trending topic detection engine for Russian Twitter. Thus the choice of language of interest is narrowed down to only Russian, although there is strong intuition that one can achieve similar results with other languages. Goals of this paper The primary goal of this paper is to prove usefulness of Russian Twitter stream as word semantic similarity resource. Twitter is a popular social network, or also called "microblogging service", which enables users to share and interact with short messages instantly and publicly (although private accounts are also available). Users all over the world generate hundreds of millions of tweets per day, all over the world, in many languages, generating enormous amount of verbal data. Traditional corpora for the word semantic similarity task are News, Wikipedia, electronic libraries and others (e.g. RUSSE workshop BIBREF4 ). It was shown that type of corpus used for training affects the resulting accuracy. 
Twitter is not usually considered, and intuition behind this is that probably every-day language is too simple and too occasional to produce good results. On the other hand, the real-time nature of this user message stream seems promising, as it may reveal what certain word means in this given moment. The other counter-argument against Twitter-as-Dataset is the policy of Twitter, which disallows publication of any dump of Twitter messages larger than 50K . However, this policy permits publication of Twitter IDs in any amount. Thus the secondary goal of this paper is to describe how to create this kind of dataset from scratch. We provide the sample of Twitter messages used, as well as set of Twitter IDs used during experiments . Previous work Semantic similarity and relatedness task received significant amount of attention. Several "Gold standard" datasets were produced to facilitate the evaluation of algorithms and models, including WordSim353 BIBREF8 , RG-65 BIBREF9 for English language and others. These datasets consist of several pairs of words, where each pair receives a score from human annotators. The score represents the similarity between two words, from 0% (not similar) to 100% (identical meaning, words are synonyms). Usually these scores are filled out by a number of human annotators, for instance, 13 in case of WordSim353 . The inter-annotator agreement is measured and the mean value is put into dataset. Until recent days there was no such dataset for Russian language. To mitigate this the “RUSSE: The First Workshop on Russian Semantic Similarity” BIBREF4 was conducted, producing RUSSE Human-Judgements evaluation dataset (we will refer to it as HJ-dataset). RUSSE dataset was constructed the following way. Firstly, datasets WordSim353, MC BIBREF10 and RG-65 were combined and translated. Then human judgements were obtained by crowdsourcing (using custom implementation). Final size of the dataset is 333 word pairs, it is available on-line. The RUSSE contest was followed by paper from its organizers BIBREF4 and several participators BIBREF3 , BIBREF11 , thus filling the gap in word semantic similarity task for Russian language. In this paper we evaluate a Word2Vec model, trained on Russian Twitter corpus against RUSSE HJ-dataset, and show results comparable to top results of other RUSSE competitors. Data processing In this section we describe how we receive data from Twitter, how we filter it and how we feed it to the model. Acquiring data Twitter provides well-documented API, which allows to request any information about Tweets, users and their profiles, with respect to rate limits. There is special type of API, called Streaming API, that provides a real-time stream of tweets. The key difference with regular API is that connection is kept alive as long as possible, and Tweets are sent in real-time to the client. There are three endpoints of Streaming API of our interest: “sample”, “filter” and “firehose”. The first one provides a sample (random subset) of the full Tweet stream. The second one allows to receive Tweets matching some search criteria: matching to one or more search keywords, produced by subset of users, or coming from certain geo location. The last one provides the full set of Tweets, although it is not available by default. In order to get Twitter “firehose” one can contact Twitter, or buy this stream from third-parties. 
In our case the simplest approach would be to use “sample” endpoint, but it provides Tweets in all possible languages from all over the World, while we are concerned only about one language (Russian). In order to use this endpoint we implemented filtering based on language. The filter is simple: if Tweet does not contain a substring of 3 or more cyrillic symbols, it is considered non-Russian. Although this approach keeps Tweets in Mongolian, Ukrainian and other slavic languages (because they use cyrillic alphabet), the total amount of false-positives in this case is negligible. To demonstrate this we conducted simple experiment: on a random sample of 200 tweets only 5 were in a language different from Russian. In order not to rely on Twitter language detection, we chose to proceed with this method of language-based filtering. However, the amount of Tweets received through “sample” endpoint was not satisfying. This is probably because “sample” endpoint always streams the same content to all its clients, and small portion of it comes in Russian language. In order to force mining of Tweets in Russian language, we chose "filter" endpoint, which requires some search query. We constructed heuristic query, containing some auxiliary words, specific to Russian language: conjunctions, pronouns, prepositions. The full list is as follows: russian я, у, к, в, по, на, ты, мы, до, на, она, он, и, да. We evaluated our search query on data obtained from “sample” endpoint, and 95% of Tweets matched it. We consider this coverage as reasonable and now on use “filter” endpoint with the query and language filtering described above. In this paper we work with Tweet stream acquired from 2015/07/21 till 2015/08/04. We refer to parts of the dataset by the day of acquisition: 2015/07/21, etc. Tweet IDs used in our experiments are listed on-line. Corpus preprocessing Corpus-based algorithms like BoW and Word2Vec require text to be tokenized, and sometimes to be stemmed as well. It is common practice to filter out Stop-Words (e.g. BIBREF11 ), but in this work we don’t use it. Morphological richness of Russian language forces us to use stemming, even though models like Word2Vec does not require it. In our experiments stemmed version performs significantly better than unstemmed, so we only report results of stemmed one. To do stemming we use Yandex Tomita Parser , which is an extractor of simple facts from text in Russian language. It is based on Yandex stemmer mystem BIBREF12 . It requires a set of grammar rules and facts (i.e. simple data structures) to be extracted. In this paper we use it with one simple rule: S -> Word interp (SimpleFact.Word); This rule tells parser to interpret each word it sees and return it back immediately. We use Tomita Parser as we find it more user-friendly than mystem. Tomita Parser performs following operations: sentence splitting, tokenization, stemming, removing punctuation marks, transforming words to lowercase. Each Tweet is transformed into one or several lines of tab-separated sequences of words (if there are several sentences or lines in a Tweet). Twitter-specific “Hashtags” and “User mentions” are treated by Tomita Parser as normal words, except that “@” and “#” symbols are stripped off. HJ-dataset contains non-lemmatized words. This is understandable, because the task of this dataset was oriented to human annotators. In several cases plural form is used (consider this pair: "russianтигр, russianкошачьи"). 
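A minimal sketch of the language filter and the search keywords described above is given below; the exact Unicode range used for "Cyrillic symbols" is our assumption, and the keyword list is reproduced from the text with the duplicate entry removed.

```python
import re

# Keep a tweet if it contains a run of three or more Cyrillic characters.
CYRILLIC_RUN = re.compile(r"[\u0400-\u04FF]{3,}")

# Auxiliary Russian words used as the search query for the "filter" endpoint.
TRACK_KEYWORDS = ["я", "у", "к", "в", "по", "на", "ты", "мы",
                  "до", "она", "он", "и", "да"]

def looks_russian(text):
    """Heuristic language filter; may rarely admit other Cyrillic-script languages."""
    return CYRILLIC_RUN.search(text) is not None

def matches_track_query(text):
    """Check whether a tweet would have matched the keyword query."""
    tokens = re.findall(r"\w+", text.lower())
    return any(word in tokens for word in TRACK_KEYWORDS)
```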
In order to compute similarity for those pairs, and having in mind that Twitter data is pre-stemmed, we have to stem HJ-dataset with same parser as well. Training the model We use Word2Vec to obtain word vectors from Twitter corpus. In this model word vectors are initialized randomly for each unique word and are fed to a sort of neural network. Authors of Word2Vec propose two different models: Skip-gram and CBOW. The first one is trained to predict the context of the word given just the word vector itself. The second one is somewhat opposite: it is trained to predict the word vector given its context. In our study CBOW always performs worse than Skip-gram, hence we describe only results with Skip-gram model. Those models have several training parameters, namely: vector size, size of vocabulary (or minimal frequency of a word), context size, threshold of downsampling, amount of training epochs. We choose vector size based on size of corpus. We use “context size” as “number of tokens before or after current token”. In all experiments presented in this paper we use one training epoch. There are several implementations of Word2Vec available, including original C utility and a Python library gensim. We use the latter one as we find it more convenient. Output of Tomita Parser is fed directly line-by-line to the model. It produces the set of vectors, which we then query to obtain similarity between word vectors, in order to compute the correlation with HJ-dataset. To compute correlation we use Spearman coefficient, since it was used as accuracy measure in RUSSE BIBREF4 . Experimental results In this section we describe properties of data obtained from Twitter, describe experiment protocols and results. Properties of the data In order to train Word2Vec model for semantic similarity task we collected Twitter messages for 15 full days, from 2015/07/21 till 2015/08/04. Each day contains on average 3M of Tweets and 40M of tokens. All properties measured are shown in Table 1. Our first observation was that given one day of Twitter data we cannot estimate all of the words from HJ-dataset, because they appear too rarely. We fixed the frequency threshold on value of 40 occurrences per day and counted how many words from HJ-dataset are below this threshold. Our second observation was that words "missing" from HJ-dataset are different from day to day. This is not very surprising having in mind the dynamic nature of Twitter data. Thus estimation of word vectors is different from day to day. In order to estimate the fluctuation of this semantic measure, we conduct training of Word2Vec on each day in our corpus. We fix vector size to 300, context size to 5, downsampling threshold to 1e-3, and minimal word occurrence threshold (also called min-freq) to 40. The results are shown in Table 2. Mean Spearman correlation between daily Twitter splits and HJ-dataset is 0.36 with std.dev. of 0.04. Word pairs for missing words (infrequent ones) were excluded. We also create superset of all infrequent words, i.e. words having frequency below 40 in at least one daily split. This set contains 50 words and produces 76 "infrequent word" pairs (out of 333). Every pair containing at least one infrequent word was excluded. On that subset of HJ-dataset mean correlation is 0.29 with std.dev. of 0.03. We consider this to be reasonably stable result. Determining optimal corpus size Word2Vec model was designed to be trained on large corpora. 
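A minimal version of this training-and-evaluation loop, using gensim and SciPy, might look as follows. Parameter names follow gensim 4.x (vector_size, epochs); older gensim releases, which the authors more likely used, call them size and iter. The file paths and the HJ-dataset file format (word1<TAB>word2<TAB>score) are assumptions.

```python
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence
from scipy.stats import spearmanr

# One pre-tokenized, stemmed sentence per line (output of Tomita Parser).
sentences = LineSentence("tweets_2015-07-21.stemmed.txt")

model = Word2Vec(
    sentences,
    sg=1,             # Skip-gram (CBOW performed worse in these experiments)
    vector_size=300,  # "size" in gensim < 4.0
    window=5,
    min_count=40,     # min-freq threshold of 40 occurrences
    sample=1e-3,      # downsampling threshold
    epochs=1,         # one training epoch, as in the paper
)

# Correlate model similarities with human judgements from the HJ-dataset.
model_scores, human_scores = [], []
with open("hj_dataset_stemmed.tsv", encoding="utf-8") as f:
    for line in f:
        w1, w2, human = line.rstrip("\n").split("\t")
        if w1 in model.wv and w2 in model.wv:  # skip infrequent/missing words
            model_scores.append(model.wv.similarity(w1, w2))
            human_scores.append(float(human))

rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman correlation with HJ-dataset: {rho:.3f}")
```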
There are reported results of training it in reasonable time on a corpus of 1 billion tokens BIBREF2 . It was mentioned that the accuracy of the estimated word vectors improves with the size of the corpus. Twitter provides an enormous amount of data, which makes it a perfect fit for Word2Vec. We fix the model parameters to the following values: vector size of 300, min-freq of 40, context size of 5 and downsampling of 1e-3. We train our model subsequently with 1, 7 and 15 days of Twitter data (each starting with 07/21 and followed by subsequent days). The largest corpus of 15 days contains 580M tokens. Results of training are shown in Table 3. In this experiment the best result belongs to the 7-day corpus with 0.56 correlation with the HJ-dataset, while the 15-day corpus scores slightly lower, 0.55. This can be explained as follows: in order to achieve better results with Word2Vec one should increase both corpus and vector sizes. Indeed, training a model with a vector size of 600 on the full Twitter corpus (15 days) shows the best result of 0.59. It is also worth noting that the number of "missing" pairs is negligible in the 7-day corpus: the only missing word (and pair) is "йель" (Yale, the name of a university in the USA). There are no "missing" words in the 15-day corpus. Training the model on the 15-day corpus took 8 hours on our machine with 2 cores and 4 GB of RAM. We have an intuition that further improvements are possible with a larger corpus. Comparing our results to the ones reported by RUSSE participants, we conclude that our best result of 0.598 is comparable to other results, as it (virtually) encloses the top-10 of results. However, the best RUSSE submission has a large accuracy gap of 0.16 compared to our Twitter corpus. Bearing in mind that the best results in RUSSE combine several corpora, it is reasonable to compare the Twitter results to other single-corpus results. For convenience we replicate the results for these corpora, originally presented in BIBREF4 , alongside our result in Table 5. Given these considerations we conclude that with a Twitter corpus of 500M tokens one can achieve reasonably good results on the task of word semantic similarity. Determining optimal context size The authors of Word2Vec BIBREF2 and Paragraph Vector BIBREF13 advise determining the optimal context size for each distinct training session. In our Twitter corpus the average sentence length appears to be 9.8 tokens with a standard deviation of 4.9; this means that most sentences have fewer than 20 tokens. This is one of the peculiarities of Twitter data: Tweets are limited in size, hence sentences are short, and a context size greater than 10 is redundant. We choose to train word vectors with 3 different context size values: 2, 5, and 10. We make two rounds of training: the first with Twitter data from 07/21 till 07/25, and the second from 07/26 till 07/30. Results of measuring correlation with the HJ-dataset are shown in Table 4. According to these results a context size of 5 is slightly better than the others, but the difference is negligible compared to the fluctuation between several training attempts. Some further observations A vector space model is capable of giving more information than just a measure of the semantic distance between two given words. It has been shown that word vectors can have multiple degrees of similarity. In particular, it is possible to model simple relations, such as "country"-"capital city", gender, and syntactic relations, with algebraic operations over these vectors. The authors of BIBREF2 propose assessing the quality of these vectors on the task of exact prediction of such word relations.
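The analogy-style relations mentioned above can be probed directly with gensim's vector arithmetic, reusing the model trained in the earlier sketch. The snippet below only illustrates the kind of query involved; the example words are ours and this is not an experiment reported in the paper.

```python
# "Moscow" - "Russia" + "France" should ideally land near "Paris".
result = model.wv.most_similar(
    positive=["москва", "франция"],  # Moscow, France (stemmed forms may differ)
    negative=["россия"],             # Russia
    topn=5,
)
for word, score in result:
    print(word, round(score, 3))
```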
However, word vectors learned from Twitter seem to perform poorly on this task. We do not conduct systematic research on this subject here because it falls outside the scope of the current paper, though it is an important direction for future studies. Twitter posts often contain three special types of words: user mentions, hashtags and hyperlinks. It can be beneficial to filter them out (treating them as stop words). In the results presented in this paper, and in particular in Tables 3 and 4, we do not filter such words. It is debatable whether one should remove hashtags from the analysis, since they are often valid words or multiword expressions. It can also be beneficial, in some tasks, to estimate word vectors for a username. Hyperlinks in Twitter posts are always shortened. It is not clear how to treat them: filter them out completely, keep them, or even expand them. However, some of our experiments show that filtering of user mentions and hyperlinks can improve accuracy on the word semantic relatedness task by 3-5%. Conclusion In this paper we investigated the use of a Twitter corpus for training a Word2Vec model for the task of word semantic similarity. We described a method to obtain a stream of Twitter messages and prepare them for training. We use the HJ-dataset, which was created for the RUSSE contest BIBREF4 , to measure the correlation between the similarity of word vectors and human judgements of word-pair similarity. We achieve results comparable with those obtained by training Word2Vec on traditional corpora, like Wikipedia and Web pages BIBREF3 , BIBREF11 . This is especially important because Twitter data is highly dynamic, while traditional sources are mostly static (they rarely change over time). Thus verbal data acquired from Twitter may be used to estimate word vectors for neologisms, or to detect other changes in word semantics, as soon as they appear in human speech.
They collected tweets in the Russian language using a heuristic query of auxiliary words specific to Russian, combined with a Cyrillic-based language filter.
5b6aec1b88c9832075cd343f59158078a91f3597
5b6aec1b88c9832075cd343f59158078a91f3597_0
Q: How does proposed word embeddings compare to Sindhi fastText word representations? Text: Introduction Sindhi is a rich morphological, mutltiscript, and multidilectal language. It belongs to the Indo-Aryan language family BIBREF0, with significant cultural and historical background. Presently, it is recognized as is an official language BIBREF1 in Sindh province of Pakistan, also being taught as a compulsory subject in Schools and colleges. Sindhi is also recognized as one of the national languages in India. Ulhasnagar, Rajasthan, Gujarat, and Maharashtra are the largest Indian regions of Sindhi native speakers. It is also spoken in other countries except for Pakistan and India, where native Sindhi speakers have migrated, such as America, Canada, Hong Kong, British, Singapore, Tanzania, Philippines, Kenya, Uganda, and South, and East Africa. Sindhi has rich morphological structure BIBREF2 due to a large number of homogeneous words. Historically, it was written in multiple writing systems, which differ from each other in terms of orthography and morphology. The Persian-Arabic is the standard script of Sindhi, which was officially accepted in 1852 by the British government. However, the Sindhi-Devanagari is also a popular writing system in India being written in left to right direction like the Hindi language. Formerly, Khudabadi, Gujrati, Landa, Khojki, and Gurumukhi were also adopted as its writing systems. Even though, Sindhi has great historical and literal background, presently spoken by nearly 75 million people BIBREF1. The research on SNLP was coined in 2002, however, IT grabbed research attention after the development of its Unicode system BIBREF3. But still, Sindhi stands among the low-resourced languages due to the scarcity of core language processing resources of the raw and annotated corpus, which can be utilized for training robust word embeddings or the use of machine learning algorithms. Since the development of annotated datasets requires time and human resources. The Language Resources (LRs) are fundamental elements for the development of high quality NLP systems based on automatic or NN based approaches. The LRs include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. The development of such resources has received great research interest for the digitization of human languages BIBREF4. Many world languages are rich in such language processing resources integrated in their software tools including English BIBREF5 BIBREF6, Chinese BIBREF7 and other languages BIBREF8 BIBREF9. The Sindhi language lacks the basic computational resources BIBREF10 of a large text corpus, which can be utilized for training robust word embeddings and developing language independent NLP applications including semantic analysis, sentiment analysis, parts of the speech tagging, named entity recognition, machine translation BIBREF11, multitasking BIBREF12, BIBREF13. Presently Sindhi Persian-Arabic is frequently used for online communication, newspapers, public institutions in Pakistan, and India BIBREF1. But little work has been carried out for the development of LRs such as raw corpus BIBREF14, BIBREF15, annotated corpus BIBREF16, BIBREF17, BIBREF1, BIBREF18. In the best of our knowledge, Sindhi lacks the large unlabelled corpus which can be utilized for generating and evaluating word embeddings for Statistical Sindhi Language Processing (SSLP). 
One way to to break out this loop is to learn word embeddings from unlabelled corpora, which can be utilized to bootstrap other downstream NLP tasks. The word embedding is a new term of semantic vector space BIBREF19, distributed representations BIBREF20, and distributed semantic models. It is a language modeling approach BIBREF21 used for the mapping of words and phrases into $n$-dimensional dense vectors of real numbers that effectively capture the semantic and syntactic relationship with neighboring words in a geometric way BIBREF22 BIBREF23. Such as “Einstein” and “Scientist” would have greater similarity compared with “Einstein” and “doctor.” In this way, word embeddings accomplish the important linguistic concept of “a word is characterized by the company it keeps". More recently NN based models yield state-of-the-art performance in multiple NLP tasks BIBREF24 BIBREF25 with the word embeddings. One of the advantages of such techniques is they use unsupervised approaches for learning representations and do not require annotated corpus which is rare for low-resourced Sindhi language. Such representions can be trained on large unannotated corpora, and then generated representations can be used in the NLP tasks which uses a small amount of labelled data. In this paper, we address the problems of corpus construction by collecting a large corpus of more than 61 million words from multiple web resources using the web-scrappy framework. After the collection of the corpus, we carefully preprocessed for the filtration of noisy text, e.g., the HTML tags and vocabulary of the English language. The statistical analysis is also presented for the letter, word frequencies and identification of stop-words. Finally, the corpus is utilized to generate Sindhi word embeddings using state-of-the-art GloVe BIBREF26 SG and CBoW BIBREF27 BIBREF20 BIBREF24 algorithms. The popular intrinsic evaluation method BIBREF20 BIBREF28 BIBREF29 of calculating cosine similarity between word vectors and WordSim353 BIBREF30 are employed to measure the performance of the learned Sindhi word embeddings. We translated English WordSim353 word pairs into Sindhi using bilingual English to Sindhi dictionary. The intrinsic approach typically involves a pre-selected set of query terms BIBREF23 and semantically related target words, which we refer to as query words. Furthermore, we also compare the proposed word embeddings with recently revealed Sindhi fastText (SdfastText) BIBREF25 word representations. To the best of our knowledge, this is the first comprehensive work on the development of large corpus and generating word embeddings along with systematic evaluation for low-resourced Sindhi Persian-Arabic. The synopsis of our novel contributions is listed as follows: We present a large corpus of more than 61 million words obtained from multiple web resources and reveal a list of Sindhi stop words. We develop a text cleaning pipeline for the preprocessing of the raw corpus. Generate word embeddings using GloVe, CBoW, and SG Word2Vec algorithms also evaluate and compare them using the intrinsic evaluation approaches of cosine similarity matrix and WordSim353. We are the first to evaluate SdfastText word representations and compare them with our proposed Sindhi word embeddings. The remaining sections of the paper are organized as; Section SECREF2 presents the literature survey regarding computational resources, Sindhi corpus construction, and word embedding models. 
Afterwards, Section SECREF3 presents the employed methodology, Section SECREF4 consist of statistical analysis of the developed corpus. Section SECREF5 present the experimental setup. The intrinsic evaluation results along with comparison are given in Section SECREF6. The discussion and future work are given in Section SECREF7, and lastly, Section SECREF8 presents the conclusion. Related work The natural language resources refer to a set of language data and descriptions BIBREF31 in machine readable form, used for building, improving, and evaluating NLP algorithms or softwares. Such resources include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. Many world languages are rich in such language processing resources integrated in the software tools including NLTK for English BIBREF5, Stanford CoreNLP BIBREF6, LTP for Chinese BIBREF7, TectoMT for German, Russian, Arabic BIBREF8 and multilingual toolkit BIBREF9. But Sindhi language is at an early stage for the development of such resources and software tools. The corpus construction for NLP mainly involves important steps of acquisition, preprocessing, and tokenization. Initially, BIBREF14 discussed the morphological structure and challenges concerned with the corpus development along with orthographical and morphological features in the Persian-Arabic script. The raw and annotated corpus BIBREF1 for Sindhi Persian-Arabic is a good supplement towards the development of resources, including raw and annotated datasets for parts of speech tagging, morphological analysis, transliteration between Sindhi Persian-Arabic and Sindhi-Devanagari, and machine translation system. But the corpus is acquired only form Wikipedia-dumps. A survey-based study BIBREF4 provides all the progress made in the Sindhi Natural Language Processing (SNLP) with the complete gist of adopted techniques, developed tools and available resources which show that work on resource development on Sindhi needs more sophisticated efforts. The raw corpus is utilized for word segmentation BIBREF32 of Sindhi Persian-Arabic. More recently, an initiative towards the development of resources is taken BIBREF16 by open sourcing annotated dataset of Sindhi Persian-Arabic obtained from news and social blogs. The existing and proposed work is presented in Table TABREF9 on the corpus development, word segmentation, and word embeddings, respectively. The power of word embeddings in NLP was empirically estimated by proposing a neural language model BIBREF21 and multitask learning BIBREF12, but recently usage of word embeddings in deep neural algorithms has become integral element BIBREF33 for performance acceleration in deep NLP applications. The CBoW and SG BIBREF27 BIBREF20 popular word2vec neural architectures yielded high quality vector representations in lower computational cost with integration of character-level learning on large corpora in terms of semantic and syntactic word similarity later extended BIBREF33 BIBREF24. Both approaches produce state-of-the-art accuracy with fast training performance, better representations of less frequent words and efficient representation of phrases as well. BIBREF34 proposed NN based approach for generating morphemic-level word embeddings, which surpassed all the existing embedding models in intrinsic evaluation. A count-based GloVe model BIBREF26 also yielded state-of-the-art results in an intrinsic evaluation and downstream NLP tasks. 
The performance of Word embeddings is evaluated using intrinsic BIBREF23 BIBREF29 and extrinsic evaluation BIBREF28 methods. The performance of word embeddings can be measured with intrinsic and extrinsic evaluation approaches. The intrinsic approach is used to measure the internal quality of word embeddings such as querying nearest neighboring words and calculating the semantic or syntactic similarity between similar word pairs. A method of direct comparison for intrinsic evaluation of word embeddings measures the neighborhood of a query word in vector space. The key advantage of that method is to reduce bias and create insight to find data-driven relevance judgment. An extrinsic evaluation approach is used to evaluate the performance in downstream NLP tasks, such as parts-of-speech tagging or named-entity recognition BIBREF23, but the Sindhi language lacks annotated corpus for such type of evaluation. Moreover, extrinsic evaluation is time consuming and difficult to interpret. Therefore, we opt intrinsic evaluation method BIBREF28 to get a quick insight into the quality of proposed Sindhi word embeddings by measuring the cosine distance between similar words and using WordSim353 dataset. A study reveals that the choice of optimized hyper-parameters BIBREF35 has a great impact on the quality of pretrained word embeddings as compare to desing a novel algorithm. Therefore, we optimized the hyperparameters for generating robust Sindhi word embeddings using CBoW, SG and GloVe models. The embedding visualization is also useful to visualize the similarity of word clusters. Therefore, we use t-SNE BIBREF36 dimensionality reduction algorithm for compressing high dimensional embedding into 2-dimensional $x$,$y$ coordinate pairs with PCA BIBREF37. The PCA is useful to combine input features by dropping the least important features while retaining the most valuable features. Methodology This section presents the employed methodology in detail for corpus acquisition, preprocessing, statistical analysis, and generating Sindhi word embeddings. Methodology ::: Task description We initiate this work from scratch by collecting large corpus from multiple web resources. After preprocessing and statistical analysis of the corpus, we generate Sindhi word embeddings with state-of-the-art CBoW, SG, and GloVe algorithms. The generated word embeddings are evaluated using the intrinsic evaluation approaches of cosine similarity between nearest neighbors, word pairs, and WordSim-353 for distributional semantic similarity. Moreover, we use t-SNE with PCA for the comparison of the distance between similar words via visualization. Methodology ::: Corpus acquisition The corpus is a collection of human language text BIBREF31 built with a specific purpose. However, the statistical analysis of the corpus provides quantitative, reusable data, and an opportunity to examine intuitions and ideas about language. Therefore, the corpus has great importance for the study of written language to examine the text. 
In fact, realizing the necessity of large text corpus for Sindhi, we started this research by collecting raw corpus from multiple web resource using web-scrappy framwork for extraction of news columns of daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary websites, novels, history and religious books from Sindhi Adabi Board and tweets regarding news and sports are collected from twitter. Methodology ::: Preprocessing The preprocessing of text corpus obtained from multiple web resources is a challenging task specially it becomes more complicated when working on low-resourced language like Sindhi due to the lack of open-source preprocessing tools such as NLTK BIBREF5 for English. Therefore, we design a preprocessing pipeline depicted in Figure FIGREF22 for the filtration of unwanted data and vocabulary of other languages such as English to prepare input for word embeddings. Whereas, the involved preprocessing steps are described in detail below the Figure FIGREF22. Moreover, we reveal the list of Sindhi stop words BIBREF38 which is labor intensive and requires human judgment as well. Hence, the most frequent and least important words are classified as stop words with the help of a Sindhi linguistic expert. The partial list of Sindhi stop words is given in TABREF61. We use python programming language for designing the preprocessing pipeline using regex and string functions. Input: The collected text documents were concatenated for the input in UTF-8 format. Replacement symbols: The punctuation marks of a full stop, hyphen, apostrophe, comma, quotation, and exclamation marks replaced with white space for authentic tokenization because without replacing these symbols with white space the words were found joined with their next or previous corresponding words. Filtration of noisy data: The text acquisition from web resources contain a huge amount of noisy data. Therefore, we filtered out unimportant data such as the rest of the punctuation marks, special characters, HTML tags, all types of numeric entities, email, and web addresses. Normalization: In this step, We tokenize the corpus then normalize to lower-case for the filtration of multiple white spaces, English vocabulary, and duplicate words. The stop words were only filtered out for preparing input for GloVe. However, the sub-sampling approach in CBoW and SG can discard most frequent or stop words automatically. Methodology ::: Word embedding models The NN based approaches have produced state-of-the-art performance in NLP with the usage of robust word embedings generated from the large unlabelled corpus. Therefore, word embeddings have become the main component for setting up new benchmarks in NLP using deep learning approaches. Most recently, the use cases of word embeddings are not only limited to boost statistical NLP applications but can also be used to develop language resources such as automatic construction of WordNet BIBREF39 using the unsupervised approach. The word embedding can be precisely defined as the encoding of vocabulary $V$ into $N$ and the word $w$ from $V$ to vector $\overrightarrow{w} $ into $N$-dimensional embedding space. They can be broadly categorized into predictive and count based methods, being generated by employing co-occurrence statistics, NN algorithms, and probabilistic models. 
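A compact sketch of such a cleaning pipeline is given below. It is our own reconstruction of the steps listed above, using Python's re and string facilities (which the authors state they used); the exact regular expressions, the treatment of Sindhi-specific punctuation, and the function names are assumptions.

```python
import re

HTML_TAG = re.compile(r"<[^>]+>")
URL_OR_EMAIL = re.compile(r"(https?://\S+|www\.\S+|\S+@\S+)")
NUMBER = re.compile(r"[0-9٠-٩]+")           # Latin and Arabic-Indic digits
LATIN = re.compile(r"[A-Za-z]+")            # English vocabulary to discard
PUNCT = re.compile(r"[.,!?؟،؛\-'\"“”‘’]+")  # replaced with whitespace for clean tokenization


def clean_line(line: str) -> list[str]:
    """Return a list of tokens after applying the filtration steps."""
    line = HTML_TAG.sub(" ", line)
    line = URL_OR_EMAIL.sub(" ", line)
    line = NUMBER.sub(" ", line)
    line = LATIN.sub(" ", line)
    line = PUNCT.sub(" ", line)
    return line.split()


def preprocess(in_path: str, out_path: str) -> None:
    """Write one cleaned, tokenized sentence per line from the concatenated UTF-8 input.

    Stop-word removal is applied afterwards only for the GloVe input, as described above.
    """
    with open(in_path, encoding="utf-8") as src, open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            tokens = clean_line(line)
            if tokens:
                dst.write(" ".join(tokens) + "\n")
```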
The GloVe BIBREF26 algorithm treats each word as a single entity in the corpus and generates a vector of each word. However, CBoW and SG BIBREF27 BIBREF20, later extended BIBREF33 BIBREF24, well-known as word2vec rely on simple two layered NN architecture which uses linear activation function in hidden layer and softmax in the output layer. The work2vec model treats each word as a bag-of-character n-gram. Methodology ::: GloVe The GloVe is a log-bilinear regression model BIBREF26 which combines two methods of local context window and global matrix factorization for training word embeddings of a given vocabulary in an unsupervised way. It weights the contexts using the harmonic function, for example, a context word four tokens away from an occurrence will be counted as $\frac{1}{4}$. The Glove’s implementation represents word $w \in V_{w}$ and context $c \in V_{c}$ in $D$-dimensional vectors $\overrightarrow{w}$ and $\overrightarrow{c}$ in a following way, Where, $b^{\overrightarrow{w}}$ is row vector $\left|V_{w}\right|$ and $b^{\overrightarrow{c}}$ is $\left|V_{c}\right|$ is column vector. Methodology ::: Continuous bag-of-words The standard CBoW is the inverse of SG BIBREF27 model, which predicts input word on behalf of the context. The length of input in the CBoW model depends on the setting of context window size which determines the distance to the left and right of the target word. Hence the context is a window that contain neighboring words such as by giving $w=\left\lbrace w_{1}, w_{2}, \dots \dots w_{t}\right\rbrace $ a sequence of words $T$, the objective of the CBoW is to maximize the probability of given neighboring words such as, Where, $c_{t}$ is context of $t^{\text{th}}$ word for example with window $w_{t-c}, \ldots w_{t-1}, w_{t+1}, \ldots w_{t+c}$ of size $2 c$. Methodology ::: Skip gram The SG model predicts surrounding words by giving input word BIBREF20 with training objective of learning good word embeddings that efficiently predict the neighboring words. The goal of skip-gram is to maximize average log-probability of words $w=\left\lbrace w_{1}, w_{2}, \dots \dots w_{t}\right\rbrace $ across the entire training corpus, Where, $c_{t}$ denotes the context of words indices set of nearby $w_{t}$ words in the training corpus. Methodology ::: Hyperparameters ::: Sub-sampling Th sub-sampling BIBREF20 approach is useful to dilute most frequent or stop words, also accelerates learning rate, and increases accuracy for learning rare word vectors. Numerous words in English, e.g., ‘the’, ‘you’, ’that’ do not have more importance, but these words appear very frequently in the text. However, considering all the words equally would also lead to over-fitting problem of model parameters BIBREF24 on the frequent word embeddings and under-fitting on the rest. Therefore, it is useful to count the imbalance between rare and repeated words. The sub-sampling technique randomly removes most frequent words with some threshold $t$ and probability $p$ of words and frequency $f$ of words in the corpus. Where each word$w_{i}$ is discarded with computed probability in training phase, $f(w_i )$ is frequency of word $w_{i}$ and $t>0$ are parameters. Methodology ::: Hyperparameters ::: Dynamic context window The traditional word embedding models usually use a fixed size of a context window. For instance, if the window size ws=6, then the target word apart from 6 tokens will be treated similarity as the next word. 
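The display equations referred to in the passages above did not survive extraction. For reference, the standard formulations from the GloVe and word2vec literature are reproduced below; the notation may differ from the authors' originals.

```latex
% GloVe weighted least-squares objective over co-occurrence counts X_{ij}
J = \sum_{i,j=1}^{|V|} f\!\left(X_{ij}\right)\left(\vec{w}_i^{\top}\vec{c}_j + b_{w_i} + b_{c_j} - \log X_{ij}\right)^{2}

% Skip-gram: maximize the average log-probability of context words
\frac{1}{T}\sum_{t=1}^{T}\;\sum_{w_c \in c_t} \log p\!\left(w_c \mid w_t\right)

% CBoW: predict the target word from its context window of size 2c
\frac{1}{T}\sum_{t=1}^{T} \log p\!\left(w_t \mid w_{t-c},\ldots,w_{t-1},w_{t+1},\ldots,w_{t+c}\right)

% Sub-sampling: probability of discarding word w_i with corpus frequency f(w_i) and threshold t
P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}}
```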
The scheme is used to assign more weight to closer words, as closer words are generally considered to be more important to the meaning of the target word. The CBoW, SG and GloVe models employ this weighting scheme. The GloVe model weights the contexts using a harmonic function, for example, a context word four tokens away from an occurrence will be counted as $\frac{1}{4}$. However, CBoW and SG implementation equally consider the contexts by dividing the ws with the distance from target word, e.g. ws=6 will weigh its context by $\frac{6}{6} \frac{5}{6} \frac{4}{6} \frac{3}{6} \frac{2}{6} \frac{1}{6}$. Methodology ::: Hyperparameters ::: Sub-word model The sub-word model BIBREF24 can learn the internal structure of words by sharing the character representations across words. In that way, the vector for each word is made of the sum of those character $n-gram$. Such as, a vector of a word “table” is a sum of $n-gram$ vectors by setting the letter $n-gram$ size $min=3$ to $max=6$ as, $<ta, tab, tabl, table, table>, abl, able, able>, ble, ble>, le>$, we can get all sub-words of "table" with minimum length of $minn=3$ and maximum length of $maxn=6$. The $<$ and $>$ symbols are used to separate prefix and suffix words from other character sequences. In this way, the sub-word model utilizes the principles of morphology, which improves the quality of infrequent word representations. In addition to character $n-grams$, the input word $w$ is also included in the set of character $n-gram$, to learn the representation of each word. We obtain scoring function using a input dictionary of $n-grams$ with size $K$ by giving word $w$ , where $K_{w} \subset \lbrace 1, \ldots , K\rbrace $. A word representation $Z_{k}$ is associated to each $n-gram$ $Z$. Hence, each word is represented by the sum of character $n-gram$ representations, where, $s$ is the scoring function in the following equation, Methodology ::: Hyperparameters ::: Position-dependent weights The position-dependent weighting approach BIBREF40 is used to avoid direct encoding of representations for words and their positions which can lead to over-fitting problem. The approach learns positional representations in contextual word representations and used to reweight word embedding. Thus, it captures good contextual representations at lower computational cost, Where, $p$ is individual position in context window associated with $d_{p}$ vector. Afterwards the context vector reweighted by their positional vectors is average of context words. The relative positional set is $P$ in context window and $v_{C}$ is context vector of $w_{t}$ respectively. Methodology ::: Hyperparameters ::: Shifted point-wise mutual information The use sparse Shifted Positive Point-wise Mutual Information (SPPMI) BIBREF41 word-context matrix in learning word representations improves results on two word similarity tasks. The CBoW and SG have $k$ (number of negatives) BIBREF27 BIBREF20 hyperparameter, which affects the value that both models try to optimize for each $(w, c): P M I(w, c)-\log k$. Parameter $k$ has two functions of better estimation of negative examples, and it performs as before observing the probability of positive examples (actual occurrence of $w,c$). Methodology ::: Hyperparameters ::: Deleting rare words Before creating a context window, the automatic deletion of rare words also leads to performance gain in CBoW, SG and GloVe models, which further increases the actual size of context windows. 
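The character n-gram decomposition used by the sub-word model can be reproduced with a few lines of Python. The helper below mirrors the "table" example with minn=3 and maxn=6; it is our own illustrative code rather than the fastText internals.

```python
def char_ngrams(word: str, minn: int = 3, maxn: int = 6) -> list[str]:
    """Return the boundary-marked character n-grams of a word, plus the word itself."""
    marked = f"<{word}>"
    grams = []
    for n in range(minn, maxn + 1):
        for i in range(len(marked) - n + 1):
            grams.append(marked[i : i + n])
    grams.append(marked)  # the full word is also included as its own feature
    return grams


print(char_ngrams("table"))
# ['<ta', 'tab', 'abl', 'ble', 'le>', '<tab', 'tabl', 'able', 'ble>', ...]
```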
Methodology ::: Evaluation methods The intrinsic evaluation is based on semantic similarity BIBREF23 in word embeddings. The word similarity measure approach states BIBREF35 that the words are similar if they appear in the similar context. We measure word similarity of proposed Sindhi word embeddings using dot product method and WordSim353. Methodology ::: Evaluation methods ::: Cosine similarity The cosine similarity between two non-zero vectors is a popular measure that calculates the cosine of the angle between them which can be derived by using the Euclidean dot product method. The dot product is a multiplication of each component from both vectors added together. The result of a dot product between two vectors isn’t another vector but a single value or a scalar. The dot product for two vectors can be defined as: $\overrightarrow{a}=\left(a_{1}, a_{2}, a_{3}, \dots , a_{n}\right)$ and $\overrightarrow{b}=\left({b}_{1}, {b}_{2}, {b}_{3}, \ldots , {b}_{n}\right)$ where $a_{n}$ and $b_{n}$ are the components of the vector and $n$ is dimension of vectors such as, However, the cosine of two non-zero vectors can be derived by using the Euclidean dot product formula, Given $a_{i}$ two vectors of attributes $a$ and $b$, the cosine similarity, $\cos ({\theta })$, is represented using a dot product and magnitude as, where $a_{i}$ and $b_{i}$ are components of vector $\overrightarrow{a}$ and $\overrightarrow{b}$, respectively. Methodology ::: Evaluation methods ::: WordSim353 The WordSim353 BIBREF42 is popular for the evaluation of lexical similarity and relatedness. The similarity score is assigned with 13 to 16 human subjects with semantic relations BIBREF30 for 353 English noun pairs. Due to the lack of annotated datasets in the Sindhi language, we translated WordSim353 using English to Sindhi bilingual dictionary for the evaluation of our proposed Sindhi word embeddings and SdfastText. We use the Spearman correlation coefficient for the semantic and syntactic similarity comparison which is used to used to discover the strength of linear or nonlinear relationships if there are no repeated data values. A perfect Spearman’s correlation of $+1$ or $-1$ discovers the strength of a link between two sets of data (word-pairs) when observations are monotonically increasing or decreasing functions of each other in a following way, where $r_s$ is the rank correlation coefficient, $n$ denote the number of observations, and $d^i$ is the rank difference between $i^{th}$ observations. Statistical analysis of corpus The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of collected corpus (see Table TABREF52) with number of sentences, words and unique tokens. Statistical analysis of corpus ::: Letter occurrences The frequency of letter occurrences in human language is not arbitrarily organized but follow some specific rules which enable us to describe some linguistic regularities. The Zipf’s law BIBREF43 suggests that if the frequency of letter or word occurrence ranked in descending order such as, Where, $F_{r}$ is the letter frequency of rth rank, $a$ and $b$ are parameters of input text. The comparative letter frequency in the corpus is the total number of occurrences of a letter divided by the total number of letters present in the corpus. The letter frequencies in our developed corpus are depicted in Figure FIGREF55; however, the corpus contains 187,620,276 total number of the character set. 
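The two evaluation measures described above can be written compactly with NumPy and SciPy. The snippet below is a generic sketch; the word-pair container and the embedding lookup are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import spearmanr


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """cos(theta) = (a . b) / (|a| |b|)"""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def wordsim_correlation(pairs, embeddings) -> float:
    """pairs: iterable of (word1, word2, human_score); embeddings: dict word -> vector."""
    model_scores, human_scores = [], []
    for w1, w2, human in pairs:
        if w1 in embeddings and w2 in embeddings:  # skip untranslated/missing pairs
            model_scores.append(cosine(embeddings[w1], embeddings[w2]))
            human_scores.append(human)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```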
Sindhi Persian-Arabic alphabet consists of 52 letters but in the vocabulary 59 letters are detected, additional seven letters are modified uni-grams and standalone honorific symbols. Statistical analysis of corpus ::: Letter n-grams frequency We denote the combination of letter occurrences in a word as n-grams, where each letter is a gram in a word. The letter n-gram frequency is carefully analyzed in order to find the length of words which is essential to develop NLP systems, including learning of word embeddings such as choosing the minimum or maximum length of sub-word for character-level representation learning BIBREF24. We calculate the letter n-grams in words along with their percentage in the developed corpus (see Table TABREF57). The bi-gram words are most frequent, mostly consists of stop words and secondly, 4-gram words have a higher frequency. Statistical analysis of corpus ::: Word Frequencies The word frequency count is an observation of word occurrences in the text. The commonly used words are considered to be with higher frequency, such as the word “the" in English. Similarly, the frequency of rarely used words to be lower. Such frequencies can be calculated at character or word-level. We calculate word frequencies by counting a word $w$ occurrence in the corpus $c$, such as, Where the frequency of $w$ is the sum of every occurrence $k$ of $w$ in $c$. Statistical analysis of corpus ::: Stop words The most frequent and least important words in NLP are often classified as stop words. The removal of such words can boost the performance of the NLP model BIBREF38, such as sentiment analysis and text classification. But the construction of such words list is time consuming and requires user decisions. Firstly, we determined Sindhi stop words by counting their term frequencies using Eq. DISPLAY_FORM59, and secondly, by analysing their grammatical status with the help of Sindhi linguistic expert because all the frequent words are not stop words (see Figure FIGREF62). After determining the importance of such words with the help of human judgment, we placed them in the list of stop words. The total number of detected stop words is 340 in our developed corpus. The partial list of most frequent Sindhi stop words is depicted in Table TABREF61 along with their frequency. The filtration of stop words is an essential preprocessing step for learning GloVe BIBREF26 word embeddings; therefore, we filtered out stop words for preparing input for the GloVe model. However, the sub-sampling approach BIBREF33 BIBREF24 is used to discard such most frequent words in CBoW and SG models. Experiments and results Hyperparameter optimization BIBREF23is more important than designing a novel algorithm. We carefully choose to optimize the dictionary and algorithm-based parameters of CBoW, SG and GloVe algorithms. Hence, we conducted a large number of experiments for training and evaluation until the optimization of most suitable hyperparameters depicted in Table TABREF64 and discussed in Section SECREF63. The choice of optimized hyperparameters is based on The high cosine similarity score in retrieving nearest neighboring words, the semantic, syntactic similarity between word pairs, WordSim353, and visualization of the distance between twenty nearest neighbours using t-SNE respectively. All the experiments are conducted on GTX 1080-TITAN GPU. 
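Counting term frequencies to shortlist stop-word candidates, before the manual review by a linguistic expert described below, is a one-liner with collections.Counter; the corpus path and candidate-list size are placeholders.

```python
from collections import Counter


def top_frequent_words(corpus_path: str, n: int = 500) -> list[tuple[str, int]]:
    """Return the n most frequent tokens as stop-word candidates for expert review."""
    counts = Counter()
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.split())
    return counts.most_common(n)


# Candidates are then reviewed manually; only genuinely uninformative words are
# kept in the final stop-word list (340 entries in this work).
candidates = top_frequent_words("sindhi_corpus_clean.txt")
```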
Experiments and results ::: Hyperparameter optimization The state-of-the-art SG, CBoW BIBREF27 BIBREF33 BIBREF20 BIBREF24 and Glove BIBREF26 word embedding algorithms are evaluated by parameter tuning for development of Sindhi word embeddings. These parameters can be categories into dictionary and algorithm based, respectively. The integration of character n-gram in learning word representations is an ideal method especially for rich morphological languages because this approach has the ability to compute rare and misspelled words. Sindhi is also a rich morphological language. Therefore more robust embeddings became possible to train with the hyperparameter optimization of SG, CBoW and GloVe algorithms. We tuned and evaluated the hyperparameters of three algorithms individually which are discussed as follows: Number of Epochs: Generally, more epochs on the corpus often produce better results but more epochs take long training time. Therefore, we evaluate 10, 20, 30 and 40 epochs for each word embedding model, and 40 epochs constantly produce good results. Learning rate (lr): We tried lr of $0.05$, $0.1$, and $0.25$, the optimal lr $(0.25)$ gives the better results for training all the embedding models. Dimensions ($D$): We evaluate and compare the quality of $100-D$, $200-D$, and $300-D$ using WordSim353 on different $ws$, and the optimal $300-D$ are evaluated with cosine similarity matrix for querying nearest neighboring words and calculating the similarity between word pairs. The embedding dimensions have little affect on the quality of the intrinsic evaluation process. However, the selection of embedding dimensions might have more impact on the accuracy in certain downstream NLP applications. The lower embedding dimensions are faster to train and evaluate. Character n-grams: The selection of minimum (minn) and the maximum (maxn) length of character $n-grams$ is an important parameter for learning character-level representations of words in CBoW and SG models. Therefore, the n-grams from $3-9$ were tested to analyse the impact on the accuracy of embedding. We optimized the length of character n-grams from $minn=2$ and $maxn=7$ by keeping in view the word frequencies depicted in Table TABREF57. Window size (ws): The large ws means considering more context words and similarly less ws means to limit the size of context words. By changing the size of the dynamic context window, we tried the ws of 3, 5, 7 the optimal ws=7 yield consistently better performance. Negative Sampling (NS): : The more negative examples yield better results, but more negatives take long training time. We tried 10, 20, and 30 negative examples for CBoW and SG. The best negative examples of 20 for CBoW and SG significantly yield better performance in average training time. Minimum word count (minw): We evaluated the range of minimum word counts from 1 to 8 and analyzed that the size of input vocabulary is decreasing at a large scale by ignoring more words similarly the vocabulary size was increasing by considering rare words. Therefore, by ignoring words with a frequency of less than 4 in CBoW, SG, and GloVe consistently yields better results with the vocabulary of 200,000 words. Loss function (ls): we use hierarchical softmax (hs) for CBoW, negative sampling (ns) for SG and default loss function for GloVe BIBREF26. The recommended verbosity level, number of buckets, sampling threshold, number of threads are used for training CBoW, SG BIBREF24, and GloVe BIBREF26. 
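The optimized settings above can be collected into a single training call. The sketch below uses gensim's FastText class, which implements the CBoW and SG architectures with character n-grams; the authors may have used the original fastText and GloVe command-line tools instead, so treat this only as a mirror of the reported hyperparameters, with assumed file names.

```python
from gensim.models import FastText
from gensim.models.word2vec import LineSentence

corpus = LineSentence("sindhi_corpus_clean.txt")  # one preprocessed sentence per line

sg_model = FastText(
    corpus,
    sg=1,              # Skip-gram; set sg=0 and hs=1 for the CBoW variant with hierarchical softmax
    vector_size=300,
    window=7,
    min_count=4,
    negative=20,
    min_n=2, max_n=7,  # character n-gram range
    alpha=0.25,        # learning rate as reported
    epochs=40,
)
sg_model.save("sindhi_sg_300d.model")
```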
Word similarity comparison of Word Embeddings ::: Nearest neighboring words The cosine similarity matrix BIBREF35 is a popular approach to compute the relationship between all embedding dimensions of their distinct relevance to query word. The words with similar context get high cosine similarity and geometrical relatedness to Euclidean distance, which is a common and primary method to measure the distance between a set of words and nearest neighbors. Each word contains the most similar top eight nearest neighboring words determined by the highest cosine similarity score using Eq. DISPLAY_FORM48. We present the English translation of both query and retrieved words also discuss with their English meaning for ease of relevance judgment between the query and retrieved words.To take a closer look at the semantic and syntactic relationship captured in the proposed word embeddings, Table TABREF74 shows the top eight nearest neighboring words of five different query words Friday, Spring, Cricket, Red, Scientist taken from the vocabulary. As the first query word Friday returns the names of days Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday in an unordered sequence. The SdfastText returns five names of days Sunday, Thursday, Monday, Tuesday and Wednesday respectively. The GloVe model also returns five names of days. However, CBoW and SG gave six names of days except Wednesday along with different writing forms of query word Friday being written in the Sindhi language which shows that CBoW and SG return more relevant words as compare to SdfastText and GloVe. The CBoW returned Add and GloVe returns Honorary words which are little similar to the querry word but SdfastText resulted two irrelevant words Kameeso (N) which is a name (N) of person in Sindhi and Phrase is a combination of three Sindhi words which are not tokenized properly. Similarly, nearest neighbors of second query word Spring are retrieved accurately as names and seasons and semantically related to query word Spring by CBoW, SG and Glove but SdfastText returned four irrelevant words of Dilbahar (N), Pharase, Ashbahar (N) and Farzana (N) out of eight. The third query word is Cricket, the name of a popular game. The first retrieved word in CBoW is Kabadi (N) that is a popular national game in Pakistan. Including Kabadi (N) all the returned words by CBoW, SG and GloVe are related to Cricket game or names of other games. But the first word in SdfastText contains a punctuation mark in retrieved word Gone.Cricket that are two words joined with a punctuation mark (.), which shows the tokenization error in preprocessing step, sixth retrieved word Misspelled is a combination of three words not related to query word, and Played, Being played are also irrelevant and stop words. Moreover, fourth query word Red gave results that contain names of closely related to query word and different forms of query word written in the Sindhi language. The last returned word Unknown by SdfastText is irrelevant and not found in the Sindhi dictionary for translation. The last query word Scientist also contains semantically related words by CBoW, SG, and GloVe, but the first Urdu word given by SdfasText belongs to the Urdu language which means that the vocabulary may also contain words of other languages. Another unknown word returned by SdfastText does not have any meaning in the Sindhi dictionary. 
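Retrieving the top eight neighbours used in this comparison is a single call on the trained model from the earlier sketch; the query word below is a placeholder whose Sindhi spelling is assumed.

```python
query_word = "جمعو"  # "Friday" (spelling assumed for illustration)

# Top-8 nearest neighbours by cosine similarity, as in the comparison table.
for word, score in sg_model.wv.most_similar(query_word, topn=8):
    print(word, round(score, 3))
```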
More interesting observations in the presented results are the diacritized words retrieved from our proposed word embeddings and The authentic tokenization in the preprocessing step presented in Figure FIGREF22. However, SdfastText has returned tri-gram words of Phrase in query words Friday, Spring, a Misspelled word in Cricket and Scientist query words. Hence, the overall performance of our proposed SG, CBoW, and GloVe demonstrate high semantic relatedness in retrieving the top eight nearest neighbor words. Word similarity comparison of Word Embeddings ::: Word pair relationship Generally, closer words are considered more important to a word’s meaning. The word embeddings models have the ability to capture the lexical relations between words. Identifying such relationship that connects words is important in NLP applications. We measure that semantic relationship by calculating the dot product of two vectors using Eq. DISPLAY_FORM48. The high cosine similarity score denotes the closer words in the embedding matrix, while less cosine similarity score means the higher distance between word pairs. We present the cosine similarity score of different semantically or syntactically related word pairs taken from the vocabulary in Table TABREF77 along with English translation, which shows the average similarity of 0.632, 0.650, 0.591 yields by CBoW, SG and GloVe respectively. The SG model achieved a high average similarity score of 0.650 followed by CBoW with a 0.632 average similarity score. The GloVe also achieved a considerable average score of 0.591 respectively. However, the average similarity score of SdfastText is 0.388 and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText. This shows that along with performance, the vocabulary in SdfastText is also limited as compared to our proposed word embeddings. Moreover, the average semantic relatedness similarity score between countries and their capitals is shown in Table TABREF78 with English translation, where SG also yields the best average score of 0.663 followed by CBoW with 0.611 similarity score. The GloVe also yields better semantic relatedness of 0.576 and the SdfastText yield an average score of 0.391. The first query word China-Beijing is not available the vocabulary of SdfastText. However, the similarity score between Afghanistan-Kabul is lower in our proposed CBoW, SG, GloVe models because the word Kabul is the name of the capital of Afghanistan as well as it frequently appears as an adjective in Sindhi text which means able. Word similarity comparison of Word Embeddings ::: Comparison with WordSim353 We evaluate the performance of our proposed word embeddings using the WordSim353 dataset by translation English word pairs to Sindhi. Due to vocabulary differences between English and Sindhi, we were unable to find the authentic meaning of six terms, so we left these terms untranslated. So our final Sindhi WordSim353 consists of 347 word pairs. Table TABREF80 shows the Spearman correlation results using Eq. DISPLAY_FORM51 on different dimensional embeddings on the translated WordSim353. The Table TABREF80 presents complete results with the different ws for CBoW, SG and GloVe in which the ws=7 subsequently yield better performance than ws of 3 and 5, respectively. The SG model outperforms CBoW and GloVe in semantic and syntactic similarity by achieving the performance of 0.629 with ws=7. 
In comparison with English BIBREF27 achieved the average semantic and syntactic similarity of 0.637, 0.656 with CBoW and SG, respectively. Therefore, despite the challenges in translation from English to Sindhi, our proposed Sindhi word embeddings have efficiently captured the semantic and syntactic relationship. Word similarity comparison of Word Embeddings ::: Visualization We use t-Distributed Stochastic Neighboring (t-SNE) dimensionality BIBREF36 reduction algorithm with PCA BIBREF37 for exploratory embeddings analysis in 2-dimensional map. The t-SNE is a non-linear dimensionality reduction algorithm for visualization of high dimensional datasets. It starts the probability calculation of similar word clusters in high-dimensional space and calculates the probability of similar points in the corresponding low-dimensional space. The purpose of t-SNE for visualization of word embeddings is to keep similar words close together in 2-dimensional $x,y$ coordinate pairs while maximizing the distance between dissimilar words. The t-SNE has a perplexity (PPL) tunable parameter used to balance the data points at both the local and global levels. We visualize the embeddings using PPL=20 on 5000-iterations of 300-D models. We use the same query words (see Table TABREF74) by retrieving the top 20 nearest neighboring word clusters for a better understanding of the distance between similar words. Every query word has a distinct color for the clear visualization of a similar group of words. The closer word clusters show the high similarity between the query and retrieved word clusters. The word clusters in SG (see Fig. FIGREF83) are closer to their group of semantically related words. Secondly, the CBoW model depicted in Fig. FIGREF82 and GloVe Fig. FIGREF84 also show the better cluster formation of words than SdfastText Fig. FIGREF85, respectively. Discussion and future work In this era of the information age, the existence of LRs plays a vital role in the digital survival of natural languages because the NLP tools are used to process a flow of un-structured data from disparate sources. It is imperative to mention that presently, Sindhi Persian-Arabic is frequently used in online communication, newspapers, public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for its LRs is also increasing for the development of language technology tools. But little work has been carried out for the development of resources which is not sufficient to design a language independent or machine learning algorithms. The present work is a first comprehensive initiative on resource development along with their evaluation for statistical Sindhi language processing. More recently, the NN based approaches have produced a state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from the large unlabelled corpus. Such word embeddings have also motivated the work on low-resourced languages. Our work mainly consists of novel contributions of resource development along with comprehensive evaluation for the utilization of NN based approaches in SNLP applications. The large corpus obtained from multiple web resources is utilized for the training of word embeddings using SG, CBoW and Glove models. The intrinsic evaluation along with comparative results demonstrates that the proposed Sindhi word embeddings have accurately captured the semantic information as compare to recently revealed SdfastText word vectors. 
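The t-SNE visualization described in the subsection above can be reproduced roughly as follows with scikit-learn and matplotlib. The perplexity of 20 and 5000 iterations come from the text, while the PCA pre-reduction size, the query-word spellings, and the plotting details are our assumptions; note that the n_iter argument has been renamed max_iter in recent scikit-learn releases.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Gather the query words and their top-20 neighbours from the trained model.
queries = ["جمعو", "بهار"]  # e.g. "Friday", "Spring" (spellings assumed)
words, vectors, colors = [], [], []
for ci, q in enumerate(queries):
    neighbours = [q] + [w for w, _ in sg_model.wv.most_similar(q, topn=20)]
    for w in neighbours:
        words.append(w)
        vectors.append(sg_model.wv[w])
        colors.append(ci)  # one color per query cluster

reduced = PCA(n_components=30).fit_transform(np.array(vectors))
coords = TSNE(n_components=2, perplexity=20, n_iter=5000).fit_transform(reduced)

plt.figure(figsize=(8, 8))
plt.scatter(coords[:, 0], coords[:, 1], c=colors, cmap="tab10")
for (x, y), w in zip(coords, words):
    plt.annotate(w, (x, y), fontsize=8)
plt.savefig("sindhi_sg_tsne.png", dpi=150)
```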
The SG yield best results in nearest neighbors, word pair relationship and semantic similarity. The performance of CBoW is also close to SG in all the evaluation matrices. The GloVe also yields better word representations; however SG and CBoW models surpass the GloVe model in all evaluation matrices. Hyperparameter optimization is as important as designing a new algorithm. The choice of optimal parameters is a key aspect of performance gain in learning robust word embeddings. Moreover, We analysed that the size of the corpus and careful preprocessing steps have a large impact on the quality of word embeddings. However, in algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, and overall window size, learning rate, number of epochs are the core parameters that largely influence the performance of word embeddings models. Ultimately, the new corpus of low-resourced Sindhi language, list of stop words and pretrained word embeddings along with empirical evaluation, will be a good supplement for future research in SSLP applications. In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging, named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks and the extrinsic evaluation approach will be employed for the performance analysis of proposed word embeddings. Moreover, we will also utilize the corpus using Bi-directional Encoder Representation Transformer BIBREF13 for learning deep contextualized Sindhi word representations. Furthermore, the generated word embeddings will be utilized for the automatic construction of Sindhi WordNet. Conclusion In this paper, we mainly present three novel contributions of large corpus development contains large vocabulary of more than 61 million tokens, 908,456 unique words. Secondly, the list of Sindhi stop words is constructed by finding their high frequency and least importance with the help of Sindhi linguistic expert. Thirdly, the unsupervised Sindhi word embeddings are generated using state-of-the-art CBoW, SG and GloVe algorithms and evaluated using popular intrinsic evaluation approaches of cosine similarity matrix and WordSim353 for the first time in Sindhi language processing. We translate English WordSim353 using the English-Sindhi bilingual dictionary, which will also be a good resource for the evaluation of Sindhi word embeddings. Moreover, the proposed word embeddings are also compared with recently revealed SdfastText word representations. Our empirical results demonstrate that our proposed Sindhi word embeddings have captured high semantic relatedness in nearest neighboring words, word pair relationship, country, and capital and WordSim353. The SG yields the best performance than CBoW and GloVe models subsequently. However, the performance of GloVe is low on the same vocabulary because of character-level learning of word representations and sub-sampling approaches in SG and CBoW. Our proposed Sindhi word embeddings have surpassed SdfastText in the intrinsic evaluation matrix. Also, the vocabulary of SdfastText is limited because they are trained on a small Wikipedia corpus of Sindhi Persian-Arabic. We will further investigate the extrinsic performance of proposed word embeddings on the Sindhi text classification task in the future. 
The proposed resources, along with their systematic evaluation, will be a valuable addition to the computational resources for statistical Sindhi language processing.
Proposed SG model vs SINDHI FASTTEXT: Average cosine similarity score: 0.650 vs 0.388 Average semantic relatedness similarity score between countries and their capitals: 0.663 vs 0.391
a6717e334c53ebbb87e5ef878a77ef46866e3aed
a6717e334c53ebbb87e5ef878a77ef46866e3aed_0
Q: Are trained word embeddings used for any other NLP task? Text: Introduction Sindhi is a rich morphological, mutltiscript, and multidilectal language. It belongs to the Indo-Aryan language family BIBREF0, with significant cultural and historical background. Presently, it is recognized as is an official language BIBREF1 in Sindh province of Pakistan, also being taught as a compulsory subject in Schools and colleges. Sindhi is also recognized as one of the national languages in India. Ulhasnagar, Rajasthan, Gujarat, and Maharashtra are the largest Indian regions of Sindhi native speakers. It is also spoken in other countries except for Pakistan and India, where native Sindhi speakers have migrated, such as America, Canada, Hong Kong, British, Singapore, Tanzania, Philippines, Kenya, Uganda, and South, and East Africa. Sindhi has rich morphological structure BIBREF2 due to a large number of homogeneous words. Historically, it was written in multiple writing systems, which differ from each other in terms of orthography and morphology. The Persian-Arabic is the standard script of Sindhi, which was officially accepted in 1852 by the British government. However, the Sindhi-Devanagari is also a popular writing system in India being written in left to right direction like the Hindi language. Formerly, Khudabadi, Gujrati, Landa, Khojki, and Gurumukhi were also adopted as its writing systems. Even though, Sindhi has great historical and literal background, presently spoken by nearly 75 million people BIBREF1. The research on SNLP was coined in 2002, however, IT grabbed research attention after the development of its Unicode system BIBREF3. But still, Sindhi stands among the low-resourced languages due to the scarcity of core language processing resources of the raw and annotated corpus, which can be utilized for training robust word embeddings or the use of machine learning algorithms. Since the development of annotated datasets requires time and human resources. The Language Resources (LRs) are fundamental elements for the development of high quality NLP systems based on automatic or NN based approaches. The LRs include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. The development of such resources has received great research interest for the digitization of human languages BIBREF4. Many world languages are rich in such language processing resources integrated in their software tools including English BIBREF5 BIBREF6, Chinese BIBREF7 and other languages BIBREF8 BIBREF9. The Sindhi language lacks the basic computational resources BIBREF10 of a large text corpus, which can be utilized for training robust word embeddings and developing language independent NLP applications including semantic analysis, sentiment analysis, parts of the speech tagging, named entity recognition, machine translation BIBREF11, multitasking BIBREF12, BIBREF13. Presently Sindhi Persian-Arabic is frequently used for online communication, newspapers, public institutions in Pakistan, and India BIBREF1. But little work has been carried out for the development of LRs such as raw corpus BIBREF14, BIBREF15, annotated corpus BIBREF16, BIBREF17, BIBREF1, BIBREF18. In the best of our knowledge, Sindhi lacks the large unlabelled corpus which can be utilized for generating and evaluating word embeddings for Statistical Sindhi Language Processing (SSLP). 
One way to to break out this loop is to learn word embeddings from unlabelled corpora, which can be utilized to bootstrap other downstream NLP tasks. The word embedding is a new term of semantic vector space BIBREF19, distributed representations BIBREF20, and distributed semantic models. It is a language modeling approach BIBREF21 used for the mapping of words and phrases into $n$-dimensional dense vectors of real numbers that effectively capture the semantic and syntactic relationship with neighboring words in a geometric way BIBREF22 BIBREF23. Such as “Einstein” and “Scientist” would have greater similarity compared with “Einstein” and “doctor.” In this way, word embeddings accomplish the important linguistic concept of “a word is characterized by the company it keeps". More recently NN based models yield state-of-the-art performance in multiple NLP tasks BIBREF24 BIBREF25 with the word embeddings. One of the advantages of such techniques is they use unsupervised approaches for learning representations and do not require annotated corpus which is rare for low-resourced Sindhi language. Such representions can be trained on large unannotated corpora, and then generated representations can be used in the NLP tasks which uses a small amount of labelled data. In this paper, we address the problems of corpus construction by collecting a large corpus of more than 61 million words from multiple web resources using the web-scrappy framework. After the collection of the corpus, we carefully preprocessed for the filtration of noisy text, e.g., the HTML tags and vocabulary of the English language. The statistical analysis is also presented for the letter, word frequencies and identification of stop-words. Finally, the corpus is utilized to generate Sindhi word embeddings using state-of-the-art GloVe BIBREF26 SG and CBoW BIBREF27 BIBREF20 BIBREF24 algorithms. The popular intrinsic evaluation method BIBREF20 BIBREF28 BIBREF29 of calculating cosine similarity between word vectors and WordSim353 BIBREF30 are employed to measure the performance of the learned Sindhi word embeddings. We translated English WordSim353 word pairs into Sindhi using bilingual English to Sindhi dictionary. The intrinsic approach typically involves a pre-selected set of query terms BIBREF23 and semantically related target words, which we refer to as query words. Furthermore, we also compare the proposed word embeddings with recently revealed Sindhi fastText (SdfastText) BIBREF25 word representations. To the best of our knowledge, this is the first comprehensive work on the development of large corpus and generating word embeddings along with systematic evaluation for low-resourced Sindhi Persian-Arabic. The synopsis of our novel contributions is listed as follows: We present a large corpus of more than 61 million words obtained from multiple web resources and reveal a list of Sindhi stop words. We develop a text cleaning pipeline for the preprocessing of the raw corpus. Generate word embeddings using GloVe, CBoW, and SG Word2Vec algorithms also evaluate and compare them using the intrinsic evaluation approaches of cosine similarity matrix and WordSim353. We are the first to evaluate SdfastText word representations and compare them with our proposed Sindhi word embeddings. The remaining sections of the paper are organized as; Section SECREF2 presents the literature survey regarding computational resources, Sindhi corpus construction, and word embedding models. 
Afterwards, Section SECREF3 presents the employed methodology, Section SECREF4 consist of statistical analysis of the developed corpus. Section SECREF5 present the experimental setup. The intrinsic evaluation results along with comparison are given in Section SECREF6. The discussion and future work are given in Section SECREF7, and lastly, Section SECREF8 presents the conclusion. Related work The natural language resources refer to a set of language data and descriptions BIBREF31 in machine readable form, used for building, improving, and evaluating NLP algorithms or softwares. Such resources include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. Many world languages are rich in such language processing resources integrated in the software tools including NLTK for English BIBREF5, Stanford CoreNLP BIBREF6, LTP for Chinese BIBREF7, TectoMT for German, Russian, Arabic BIBREF8 and multilingual toolkit BIBREF9. But Sindhi language is at an early stage for the development of such resources and software tools. The corpus construction for NLP mainly involves important steps of acquisition, preprocessing, and tokenization. Initially, BIBREF14 discussed the morphological structure and challenges concerned with the corpus development along with orthographical and morphological features in the Persian-Arabic script. The raw and annotated corpus BIBREF1 for Sindhi Persian-Arabic is a good supplement towards the development of resources, including raw and annotated datasets for parts of speech tagging, morphological analysis, transliteration between Sindhi Persian-Arabic and Sindhi-Devanagari, and machine translation system. But the corpus is acquired only form Wikipedia-dumps. A survey-based study BIBREF4 provides all the progress made in the Sindhi Natural Language Processing (SNLP) with the complete gist of adopted techniques, developed tools and available resources which show that work on resource development on Sindhi needs more sophisticated efforts. The raw corpus is utilized for word segmentation BIBREF32 of Sindhi Persian-Arabic. More recently, an initiative towards the development of resources is taken BIBREF16 by open sourcing annotated dataset of Sindhi Persian-Arabic obtained from news and social blogs. The existing and proposed work is presented in Table TABREF9 on the corpus development, word segmentation, and word embeddings, respectively. The power of word embeddings in NLP was empirically estimated by proposing a neural language model BIBREF21 and multitask learning BIBREF12, but recently usage of word embeddings in deep neural algorithms has become integral element BIBREF33 for performance acceleration in deep NLP applications. The CBoW and SG BIBREF27 BIBREF20 popular word2vec neural architectures yielded high quality vector representations in lower computational cost with integration of character-level learning on large corpora in terms of semantic and syntactic word similarity later extended BIBREF33 BIBREF24. Both approaches produce state-of-the-art accuracy with fast training performance, better representations of less frequent words and efficient representation of phrases as well. BIBREF34 proposed NN based approach for generating morphemic-level word embeddings, which surpassed all the existing embedding models in intrinsic evaluation. A count-based GloVe model BIBREF26 also yielded state-of-the-art results in an intrinsic evaluation and downstream NLP tasks. 
The performance of Word embeddings is evaluated using intrinsic BIBREF23 BIBREF29 and extrinsic evaluation BIBREF28 methods. The performance of word embeddings can be measured with intrinsic and extrinsic evaluation approaches. The intrinsic approach is used to measure the internal quality of word embeddings such as querying nearest neighboring words and calculating the semantic or syntactic similarity between similar word pairs. A method of direct comparison for intrinsic evaluation of word embeddings measures the neighborhood of a query word in vector space. The key advantage of that method is to reduce bias and create insight to find data-driven relevance judgment. An extrinsic evaluation approach is used to evaluate the performance in downstream NLP tasks, such as parts-of-speech tagging or named-entity recognition BIBREF23, but the Sindhi language lacks annotated corpus for such type of evaluation. Moreover, extrinsic evaluation is time consuming and difficult to interpret. Therefore, we opt intrinsic evaluation method BIBREF28 to get a quick insight into the quality of proposed Sindhi word embeddings by measuring the cosine distance between similar words and using WordSim353 dataset. A study reveals that the choice of optimized hyper-parameters BIBREF35 has a great impact on the quality of pretrained word embeddings as compare to desing a novel algorithm. Therefore, we optimized the hyperparameters for generating robust Sindhi word embeddings using CBoW, SG and GloVe models. The embedding visualization is also useful to visualize the similarity of word clusters. Therefore, we use t-SNE BIBREF36 dimensionality reduction algorithm for compressing high dimensional embedding into 2-dimensional $x$,$y$ coordinate pairs with PCA BIBREF37. The PCA is useful to combine input features by dropping the least important features while retaining the most valuable features. Methodology This section presents the employed methodology in detail for corpus acquisition, preprocessing, statistical analysis, and generating Sindhi word embeddings. Methodology ::: Task description We initiate this work from scratch by collecting large corpus from multiple web resources. After preprocessing and statistical analysis of the corpus, we generate Sindhi word embeddings with state-of-the-art CBoW, SG, and GloVe algorithms. The generated word embeddings are evaluated using the intrinsic evaluation approaches of cosine similarity between nearest neighbors, word pairs, and WordSim-353 for distributional semantic similarity. Moreover, we use t-SNE with PCA for the comparison of the distance between similar words via visualization. Methodology ::: Corpus acquisition The corpus is a collection of human language text BIBREF31 built with a specific purpose. However, the statistical analysis of the corpus provides quantitative, reusable data, and an opportunity to examine intuitions and ideas about language. Therefore, the corpus has great importance for the study of written language to examine the text. 
In fact, realizing the necessity of a large text corpus for Sindhi, we started this research by collecting a raw corpus from multiple web resources using the web-scrappy framework: news columns of the daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from the Wichaar social blog, news from the Focus Word press blog, historical writings, novels, stories, and books from the Sindh Salamat literary website, novels, history, and religious books from the Sindhi Adabi Board, and tweets regarding news and sports collected from Twitter. Methodology ::: Preprocessing The preprocessing of a text corpus obtained from multiple web resources is a challenging task, and it becomes more complicated when working on a low-resourced language like Sindhi due to the lack of open-source preprocessing tools such as NLTK BIBREF5 for English. Therefore, we design a preprocessing pipeline, depicted in Figure FIGREF22, for the filtration of unwanted data and of the vocabulary of other languages such as English, in order to prepare the input for word embeddings. The involved preprocessing steps are described in detail below Figure FIGREF22. Moreover, we compile a list of Sindhi stop words BIBREF38, which is labor intensive and requires human judgment. Hence, the most frequent and least important words are classified as stop words with the help of a Sindhi linguistic expert. A partial list of Sindhi stop words is given in TABREF61. We use the Python programming language to implement the preprocessing pipeline with regex and string functions (a minimal sketch of these steps is given after this subsection). Input: The collected text documents were concatenated into a single input in UTF-8 format. Replacement of symbols: The punctuation marks (full stop, hyphen, apostrophe, comma, quotation, and exclamation marks) are replaced with white space for correct tokenization, because without this replacement words remain joined to their next or previous corresponding words. Filtration of noisy data: The text acquired from web resources contains a large amount of noisy data. Therefore, we filtered out unimportant data such as the remaining punctuation marks, special characters, HTML tags, all types of numeric entities, email addresses, and web addresses. Normalization: In this step, we tokenize the corpus and then normalize it to lower case, filtering out multiple white spaces, English vocabulary, and duplicate words. The stop words were filtered out only when preparing the input for GloVe; the sub-sampling approach in CBoW and SG can discard the most frequent or stop words automatically. Methodology ::: Word embedding models The NN based approaches have produced state-of-the-art performance in NLP with the usage of robust word embeddings generated from large unlabelled corpora. Therefore, word embeddings have become the main component for setting up new benchmarks in NLP using deep learning approaches. Most recently, word embeddings are not only used to boost statistical NLP applications but can also be used to develop language resources such as the automatic construction of WordNet BIBREF39 using an unsupervised approach. Word embedding can be precisely defined as the mapping of each word $w$ in the vocabulary $V$ to a vector $\overrightarrow{w}$ in an $N$-dimensional embedding space. Word embedding models can be broadly categorized into predictive and count-based methods, being generated by employing co-occurrence statistics, NN algorithms, and probabilistic models.
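As referenced in the preprocessing subsection above, a minimal sketch of the text-cleaning steps (symbol replacement, noise filtration, and normalization) is given below. The regex patterns, file name, and function names are illustrative assumptions rather than the exact implementation used for the corpus.

```python
# Minimal sketch of the preprocessing pipeline (assumed patterns and file names).
import re

PUNCT = r"[.\-'‘’,\"“”!؟۔،]"          # full stop, hyphen, apostrophe, comma, quotation, exclamation
NOISE = [
    r"<[^>]+>",                        # HTML tags
    r"\S+@\S+",                        # email addresses
    r"https?://\S+|www\.\S+",          # web addresses
    r"[0-9٠-٩۰-۹]+",                   # numeric entities (Latin and Arabic-Indic digits)
    r"[A-Za-z]+",                      # English vocabulary
]

def clean_document(text):
    text = re.sub(PUNCT, " ", text)            # replace punctuation with white space
    for pattern in NOISE:
        text = re.sub(pattern, " ", text)      # filter out noisy data
    text = re.sub(r"\s+", " ", text).strip()   # collapse multiple white spaces
    return text.lower().split()                # tokenize and normalize to lower case

tokens = clean_document(open("raw_corpus.txt", encoding="utf-8").read())
```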
The GloVe BIBREF26 algorithm treats each word as a single entity in the corpus and generates a vector for each word. However, CBoW and SG BIBREF27 BIBREF20, later extended BIBREF33 BIBREF24 and well known as word2vec, rely on a simple two-layer NN architecture that uses a linear activation function in the hidden layer and softmax in the output layer. The extended word2vec model treats each word as a bag of character n-grams. Methodology ::: GloVe The GloVe is a log-bilinear regression model BIBREF26 which combines the two methods of local context window and global matrix factorization for training word embeddings of a given vocabulary in an unsupervised way. It weights the contexts using a harmonic function; for example, a context word four tokens away from an occurrence will be counted as $\frac{1}{4}$. The GloVe implementation represents word $w \in V_{w}$ and context $c \in V_{c}$ as $D$-dimensional vectors $\overrightarrow{w}$ and $\overrightarrow{c}$ in the following way, $\overrightarrow{w} \cdot \overrightarrow{c} + b^{\overrightarrow{w}} + b^{\overrightarrow{c}} = \log \#(w, c) \quad \forall (w, c) \in D$, where $b^{\overrightarrow{w}}$ is a $\left|V_{w}\right|$ row vector and $b^{\overrightarrow{c}}$ is a $\left|V_{c}\right|$ column vector of bias terms. Methodology ::: Continuous bag-of-words The standard CBoW is the inverse of the SG BIBREF27 model, which predicts the input word on behalf of the context. The length of the input in the CBoW model depends on the setting of the context window size, which determines the distance to the left and right of the target word. Hence the context is a window that contains neighboring words: given a sequence of words $w=\left\lbrace w_{1}, w_{2}, \dots , w_{T}\right\rbrace$ of length $T$, the objective of CBoW is to maximize the probability of each word given its neighboring words, $\frac{1}{T} \sum _{t=1}^{T} \log p\left(w_{t} \mid c_{t}\right)$, where $c_{t}$ is the context of the $t^{\text{th}}$ word, for example the window $w_{t-c}, \ldots , w_{t-1}, w_{t+1}, \ldots , w_{t+c}$ of size $2c$. Methodology ::: Skip gram The SG model predicts the surrounding words given the input word BIBREF20, with the training objective of learning good word embeddings that efficiently predict the neighboring words. The goal of skip-gram is to maximize the average log-probability over the words $w=\left\lbrace w_{1}, w_{2}, \dots , w_{T}\right\rbrace$ across the entire training corpus, $\frac{1}{T} \sum _{t=1}^{T} \sum _{c \in c_{t}} \log p\left(w_{c} \mid w_{t}\right)$, where $c_{t}$ denotes the set of indices of context words near $w_{t}$ in the training corpus. Methodology ::: Hyperparameters ::: Sub-sampling The sub-sampling BIBREF20 approach is useful to dilute the most frequent or stop words; it also accelerates learning and increases accuracy for learning rare word vectors. Numerous words in English, e.g., ‘the’, ‘you’, ’that’, carry little importance but appear very frequently in the text. However, considering all the words equally would lead to over-fitting of the model parameters BIBREF24 on the frequent word embeddings and under-fitting on the rest. Therefore, it is useful to counter the imbalance between rare and repeated words. The sub-sampling technique randomly removes the most frequent words given a threshold $t$, a discard probability $p$, and the frequency $f$ of words in the corpus, $p\left(w_{i}\right)=1-\sqrt{\frac{t}{f\left(w_{i}\right)}}$, where each word $w_{i}$ is discarded with the computed probability during the training phase, $f(w_{i})$ is the frequency of word $w_{i}$, and $t>0$ is a parameter. Methodology ::: Hyperparameters ::: Dynamic context window The traditional word embedding models usually use a fixed size of context window. For instance, if the window size ws=6, then a context word six tokens away from the target is treated the same as the word immediately next to it.
The scheme is used to assign more weight to closer words, as closer words are generally considered to be more important to the meaning of the target word. The CBoW, SG and GloVe models employ this weighting scheme. The GloVe model weights the contexts using a harmonic function, for example, a context word four tokens away from an occurrence will be counted as $\frac{1}{4}$. However, CBoW and SG implementation equally consider the contexts by dividing the ws with the distance from target word, e.g. ws=6 will weigh its context by $\frac{6}{6} \frac{5}{6} \frac{4}{6} \frac{3}{6} \frac{2}{6} \frac{1}{6}$. Methodology ::: Hyperparameters ::: Sub-word model The sub-word model BIBREF24 can learn the internal structure of words by sharing the character representations across words. In that way, the vector for each word is made of the sum of those character $n-gram$. Such as, a vector of a word “table” is a sum of $n-gram$ vectors by setting the letter $n-gram$ size $min=3$ to $max=6$ as, $<ta, tab, tabl, table, table>, abl, able, able>, ble, ble>, le>$, we can get all sub-words of "table" with minimum length of $minn=3$ and maximum length of $maxn=6$. The $<$ and $>$ symbols are used to separate prefix and suffix words from other character sequences. In this way, the sub-word model utilizes the principles of morphology, which improves the quality of infrequent word representations. In addition to character $n-grams$, the input word $w$ is also included in the set of character $n-gram$, to learn the representation of each word. We obtain scoring function using a input dictionary of $n-grams$ with size $K$ by giving word $w$ , where $K_{w} \subset \lbrace 1, \ldots , K\rbrace $. A word representation $Z_{k}$ is associated to each $n-gram$ $Z$. Hence, each word is represented by the sum of character $n-gram$ representations, where, $s$ is the scoring function in the following equation, Methodology ::: Hyperparameters ::: Position-dependent weights The position-dependent weighting approach BIBREF40 is used to avoid direct encoding of representations for words and their positions which can lead to over-fitting problem. The approach learns positional representations in contextual word representations and used to reweight word embedding. Thus, it captures good contextual representations at lower computational cost, Where, $p$ is individual position in context window associated with $d_{p}$ vector. Afterwards the context vector reweighted by their positional vectors is average of context words. The relative positional set is $P$ in context window and $v_{C}$ is context vector of $w_{t}$ respectively. Methodology ::: Hyperparameters ::: Shifted point-wise mutual information The use sparse Shifted Positive Point-wise Mutual Information (SPPMI) BIBREF41 word-context matrix in learning word representations improves results on two word similarity tasks. The CBoW and SG have $k$ (number of negatives) BIBREF27 BIBREF20 hyperparameter, which affects the value that both models try to optimize for each $(w, c): P M I(w, c)-\log k$. Parameter $k$ has two functions of better estimation of negative examples, and it performs as before observing the probability of positive examples (actual occurrence of $w,c$). Methodology ::: Hyperparameters ::: Deleting rare words Before creating a context window, the automatic deletion of rare words also leads to performance gain in CBoW, SG and GloVe models, which further increases the actual size of context windows. 
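For illustration, the character n-gram decomposition described in the sub-word model subsection can be sketched as follows. The function reproduces the "table" example with the boundary symbols < and >; the decomposition order and the plain list representation are assumptions made for readability, not the exact fastText routine, which hashes n-grams into a fixed number of buckets rather than storing them explicitly.

```python
# Minimal sketch of character n-gram extraction for the sub-word model.
def char_ngrams(word, minn=3, maxn=6):
    wrapped = f"<{word}>"                    # boundary symbols separate prefixes/suffixes
    grams = []
    for n in range(minn, maxn + 1):          # all n-grams of length minn..maxn
        for i in range(len(wrapped) - n + 1):
            grams.append(wrapped[i:i + n])
    grams.append(wrapped)                    # the full word itself is also included
    return grams

print(char_ngrams("table"))
# ['<ta', 'tab', 'abl', 'ble', 'le>', '<tab', 'tabl', 'able', 'ble>', ...]
```

The word vector is then the sum of the vectors associated with these n-grams, which is what allows rare and misspelled words to receive reasonable representations.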
Methodology ::: Evaluation methods The intrinsic evaluation is based on semantic similarity BIBREF23 in word embeddings. The word similarity measure approach states BIBREF35 that the words are similar if they appear in the similar context. We measure word similarity of proposed Sindhi word embeddings using dot product method and WordSim353. Methodology ::: Evaluation methods ::: Cosine similarity The cosine similarity between two non-zero vectors is a popular measure that calculates the cosine of the angle between them which can be derived by using the Euclidean dot product method. The dot product is a multiplication of each component from both vectors added together. The result of a dot product between two vectors isn’t another vector but a single value or a scalar. The dot product for two vectors can be defined as: $\overrightarrow{a}=\left(a_{1}, a_{2}, a_{3}, \dots , a_{n}\right)$ and $\overrightarrow{b}=\left({b}_{1}, {b}_{2}, {b}_{3}, \ldots , {b}_{n}\right)$ where $a_{n}$ and $b_{n}$ are the components of the vector and $n$ is dimension of vectors such as, However, the cosine of two non-zero vectors can be derived by using the Euclidean dot product formula, Given $a_{i}$ two vectors of attributes $a$ and $b$, the cosine similarity, $\cos ({\theta })$, is represented using a dot product and magnitude as, where $a_{i}$ and $b_{i}$ are components of vector $\overrightarrow{a}$ and $\overrightarrow{b}$, respectively. Methodology ::: Evaluation methods ::: WordSim353 The WordSim353 BIBREF42 is popular for the evaluation of lexical similarity and relatedness. The similarity score is assigned with 13 to 16 human subjects with semantic relations BIBREF30 for 353 English noun pairs. Due to the lack of annotated datasets in the Sindhi language, we translated WordSim353 using English to Sindhi bilingual dictionary for the evaluation of our proposed Sindhi word embeddings and SdfastText. We use the Spearman correlation coefficient for the semantic and syntactic similarity comparison which is used to used to discover the strength of linear or nonlinear relationships if there are no repeated data values. A perfect Spearman’s correlation of $+1$ or $-1$ discovers the strength of a link between two sets of data (word-pairs) when observations are monotonically increasing or decreasing functions of each other in a following way, where $r_s$ is the rank correlation coefficient, $n$ denote the number of observations, and $d^i$ is the rank difference between $i^{th}$ observations. Statistical analysis of corpus The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of collected corpus (see Table TABREF52) with number of sentences, words and unique tokens. Statistical analysis of corpus ::: Letter occurrences The frequency of letter occurrences in human language is not arbitrarily organized but follow some specific rules which enable us to describe some linguistic regularities. The Zipf’s law BIBREF43 suggests that if the frequency of letter or word occurrence ranked in descending order such as, Where, $F_{r}$ is the letter frequency of rth rank, $a$ and $b$ are parameters of input text. The comparative letter frequency in the corpus is the total number of occurrences of a letter divided by the total number of letters present in the corpus. The letter frequencies in our developed corpus are depicted in Figure FIGREF55; however, the corpus contains 187,620,276 total number of the character set. 
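A minimal sketch of this letter-frequency analysis is given below, assuming the cleaned corpus is stored in a hypothetical clean_corpus.txt file; it counts character occurrences and ranks them in descending order, as expected under Zipf's law.

```python
# Minimal sketch of the letter-frequency count over the cleaned corpus.
from collections import Counter

text = open("clean_corpus.txt", encoding="utf-8").read()
letters = Counter(ch for ch in text if not ch.isspace())   # ignore white space

total = sum(letters.values())
for rank, (letter, freq) in enumerate(letters.most_common(10), start=1):
    # rank, letter, absolute frequency, and comparative (relative) frequency
    print(f"{rank:2d}  {letter}  {freq:>12,d}  {freq / total:.4f}")
```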
Sindhi Persian-Arabic alphabet consists of 52 letters but in the vocabulary 59 letters are detected, additional seven letters are modified uni-grams and standalone honorific symbols. Statistical analysis of corpus ::: Letter n-grams frequency We denote the combination of letter occurrences in a word as n-grams, where each letter is a gram in a word. The letter n-gram frequency is carefully analyzed in order to find the length of words which is essential to develop NLP systems, including learning of word embeddings such as choosing the minimum or maximum length of sub-word for character-level representation learning BIBREF24. We calculate the letter n-grams in words along with their percentage in the developed corpus (see Table TABREF57). The bi-gram words are most frequent, mostly consists of stop words and secondly, 4-gram words have a higher frequency. Statistical analysis of corpus ::: Word Frequencies The word frequency count is an observation of word occurrences in the text. The commonly used words are considered to be with higher frequency, such as the word “the" in English. Similarly, the frequency of rarely used words to be lower. Such frequencies can be calculated at character or word-level. We calculate word frequencies by counting a word $w$ occurrence in the corpus $c$, such as, Where the frequency of $w$ is the sum of every occurrence $k$ of $w$ in $c$. Statistical analysis of corpus ::: Stop words The most frequent and least important words in NLP are often classified as stop words. The removal of such words can boost the performance of the NLP model BIBREF38, such as sentiment analysis and text classification. But the construction of such words list is time consuming and requires user decisions. Firstly, we determined Sindhi stop words by counting their term frequencies using Eq. DISPLAY_FORM59, and secondly, by analysing their grammatical status with the help of Sindhi linguistic expert because all the frequent words are not stop words (see Figure FIGREF62). After determining the importance of such words with the help of human judgment, we placed them in the list of stop words. The total number of detected stop words is 340 in our developed corpus. The partial list of most frequent Sindhi stop words is depicted in Table TABREF61 along with their frequency. The filtration of stop words is an essential preprocessing step for learning GloVe BIBREF26 word embeddings; therefore, we filtered out stop words for preparing input for the GloVe model. However, the sub-sampling approach BIBREF33 BIBREF24 is used to discard such most frequent words in CBoW and SG models. Experiments and results Hyperparameter optimization BIBREF23is more important than designing a novel algorithm. We carefully choose to optimize the dictionary and algorithm-based parameters of CBoW, SG and GloVe algorithms. Hence, we conducted a large number of experiments for training and evaluation until the optimization of most suitable hyperparameters depicted in Table TABREF64 and discussed in Section SECREF63. The choice of optimized hyperparameters is based on The high cosine similarity score in retrieving nearest neighboring words, the semantic, syntactic similarity between word pairs, WordSim353, and visualization of the distance between twenty nearest neighbours using t-SNE respectively. All the experiments are conducted on GTX 1080-TITAN GPU. 
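As a concrete illustration of the frequency-based stop-word candidate selection described in the stop-words subsection above, the following sketch ranks tokens by term frequency before the manual review by the Sindhi linguistic expert; the file names and the candidate cut-off of 500 are illustrative assumptions, since the final list of 340 stop words is fixed only after human judgment.

```python
# Minimal sketch of stop-word candidate selection by term frequency.
from collections import Counter

tokens = open("clean_corpus.txt", encoding="utf-8").read().split()
word_freq = Counter(tokens)                      # f(w): occurrences of w in the corpus

# Highest-frequency tokens are candidates; grammatical status is then
# judged manually before a word enters the final stop-word list.
candidates = [w for w, _ in word_freq.most_common(500)]
with open("stopword_candidates.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(candidates))
```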
Experiments and results ::: Hyperparameter optimization The state-of-the-art SG, CBoW BIBREF27 BIBREF33 BIBREF20 BIBREF24 and Glove BIBREF26 word embedding algorithms are evaluated by parameter tuning for development of Sindhi word embeddings. These parameters can be categories into dictionary and algorithm based, respectively. The integration of character n-gram in learning word representations is an ideal method especially for rich morphological languages because this approach has the ability to compute rare and misspelled words. Sindhi is also a rich morphological language. Therefore more robust embeddings became possible to train with the hyperparameter optimization of SG, CBoW and GloVe algorithms. We tuned and evaluated the hyperparameters of three algorithms individually which are discussed as follows: Number of Epochs: Generally, more epochs on the corpus often produce better results but more epochs take long training time. Therefore, we evaluate 10, 20, 30 and 40 epochs for each word embedding model, and 40 epochs constantly produce good results. Learning rate (lr): We tried lr of $0.05$, $0.1$, and $0.25$, the optimal lr $(0.25)$ gives the better results for training all the embedding models. Dimensions ($D$): We evaluate and compare the quality of $100-D$, $200-D$, and $300-D$ using WordSim353 on different $ws$, and the optimal $300-D$ are evaluated with cosine similarity matrix for querying nearest neighboring words and calculating the similarity between word pairs. The embedding dimensions have little affect on the quality of the intrinsic evaluation process. However, the selection of embedding dimensions might have more impact on the accuracy in certain downstream NLP applications. The lower embedding dimensions are faster to train and evaluate. Character n-grams: The selection of minimum (minn) and the maximum (maxn) length of character $n-grams$ is an important parameter for learning character-level representations of words in CBoW and SG models. Therefore, the n-grams from $3-9$ were tested to analyse the impact on the accuracy of embedding. We optimized the length of character n-grams from $minn=2$ and $maxn=7$ by keeping in view the word frequencies depicted in Table TABREF57. Window size (ws): The large ws means considering more context words and similarly less ws means to limit the size of context words. By changing the size of the dynamic context window, we tried the ws of 3, 5, 7 the optimal ws=7 yield consistently better performance. Negative Sampling (NS): : The more negative examples yield better results, but more negatives take long training time. We tried 10, 20, and 30 negative examples for CBoW and SG. The best negative examples of 20 for CBoW and SG significantly yield better performance in average training time. Minimum word count (minw): We evaluated the range of minimum word counts from 1 to 8 and analyzed that the size of input vocabulary is decreasing at a large scale by ignoring more words similarly the vocabulary size was increasing by considering rare words. Therefore, by ignoring words with a frequency of less than 4 in CBoW, SG, and GloVe consistently yields better results with the vocabulary of 200,000 words. Loss function (ls): we use hierarchical softmax (hs) for CBoW, negative sampling (ns) for SG and default loss function for GloVe BIBREF26. The recommended verbosity level, number of buckets, sampling threshold, number of threads are used for training CBoW, SG BIBREF24, and GloVe BIBREF26. 
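A minimal sketch of training the SG model with the optimized hyperparameters listed above is given below. The text does not tie these settings to a specific toolkit, so gensim's FastText implementation, the file names, and the sub-sampling threshold value are assumptions made purely for illustration.

```python
# Minimal sketch of SG training with the tuned hyperparameters (assumed toolkit).
from gensim.models import FastText

sentences = [line.split() for line in open("clean_corpus.txt", encoding="utf-8")]

sg_model = FastText(
    sentences=sentences,
    sg=1,                # 1 = skip-gram, 0 = CBoW
    vector_size=300,     # D = 300
    window=7,            # ws = 7
    alpha=0.25,          # learning rate
    epochs=40,           # number of epochs
    negative=20,         # negative samples
    min_count=4,         # ignore words with frequency < 4
    min_n=2, max_n=7,    # character n-gram range
    sample=1e-4,         # sub-sampling threshold (assumed value)
)
sg_model.wv.save("sindhi_sg_300d.kv")
```

Setting sg=0 with hs=1 would give the corresponding CBoW configuration with hierarchical softmax, as described in the loss-function setting above.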
Word similarity comparison of Word Embeddings ::: Nearest neighboring words The cosine similarity matrix BIBREF35 is a popular approach to compute the relationship between all embedding dimensions of their distinct relevance to query word. The words with similar context get high cosine similarity and geometrical relatedness to Euclidean distance, which is a common and primary method to measure the distance between a set of words and nearest neighbors. Each word contains the most similar top eight nearest neighboring words determined by the highest cosine similarity score using Eq. DISPLAY_FORM48. We present the English translation of both query and retrieved words also discuss with their English meaning for ease of relevance judgment between the query and retrieved words.To take a closer look at the semantic and syntactic relationship captured in the proposed word embeddings, Table TABREF74 shows the top eight nearest neighboring words of five different query words Friday, Spring, Cricket, Red, Scientist taken from the vocabulary. As the first query word Friday returns the names of days Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday in an unordered sequence. The SdfastText returns five names of days Sunday, Thursday, Monday, Tuesday and Wednesday respectively. The GloVe model also returns five names of days. However, CBoW and SG gave six names of days except Wednesday along with different writing forms of query word Friday being written in the Sindhi language which shows that CBoW and SG return more relevant words as compare to SdfastText and GloVe. The CBoW returned Add and GloVe returns Honorary words which are little similar to the querry word but SdfastText resulted two irrelevant words Kameeso (N) which is a name (N) of person in Sindhi and Phrase is a combination of three Sindhi words which are not tokenized properly. Similarly, nearest neighbors of second query word Spring are retrieved accurately as names and seasons and semantically related to query word Spring by CBoW, SG and Glove but SdfastText returned four irrelevant words of Dilbahar (N), Pharase, Ashbahar (N) and Farzana (N) out of eight. The third query word is Cricket, the name of a popular game. The first retrieved word in CBoW is Kabadi (N) that is a popular national game in Pakistan. Including Kabadi (N) all the returned words by CBoW, SG and GloVe are related to Cricket game or names of other games. But the first word in SdfastText contains a punctuation mark in retrieved word Gone.Cricket that are two words joined with a punctuation mark (.), which shows the tokenization error in preprocessing step, sixth retrieved word Misspelled is a combination of three words not related to query word, and Played, Being played are also irrelevant and stop words. Moreover, fourth query word Red gave results that contain names of closely related to query word and different forms of query word written in the Sindhi language. The last returned word Unknown by SdfastText is irrelevant and not found in the Sindhi dictionary for translation. The last query word Scientist also contains semantically related words by CBoW, SG, and GloVe, but the first Urdu word given by SdfasText belongs to the Urdu language which means that the vocabulary may also contain words of other languages. Another unknown word returned by SdfastText does not have any meaning in the Sindhi dictionary. 
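A minimal sketch of the nearest-neighbour retrieval behind Table TABREF74 is given below, assuming the trained vectors are stored in gensim KeyedVectors format under a hypothetical file name; the query word is an English stand-in for the Sindhi terms. It normalizes all vectors and ranks the vocabulary by cosine similarity with the query, as defined in Eq. DISPLAY_FORM48.

```python
# Minimal sketch of top-k nearest-neighbour retrieval by cosine similarity.
import numpy as np
from gensim.models import KeyedVectors

wv = KeyedVectors.load("sindhi_sg_300d.kv")      # hypothetical file name

def top_neighbours(query, k=8):
    q = wv[query]
    q = q / np.linalg.norm(q)                                    # unit query vector
    M = wv.vectors / np.linalg.norm(wv.vectors, axis=1, keepdims=True)
    scores = M @ q                                               # cosine with every word
    best = np.argsort(-scores)
    return [(wv.index_to_key[i], float(scores[i]))
            for i in best if wv.index_to_key[i] != query][:k]

# e.g. top_neighbours("friday") is expected to return the other day names
```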
More interesting observations in the presented results are the diacritized words retrieved from our proposed word embeddings and The authentic tokenization in the preprocessing step presented in Figure FIGREF22. However, SdfastText has returned tri-gram words of Phrase in query words Friday, Spring, a Misspelled word in Cricket and Scientist query words. Hence, the overall performance of our proposed SG, CBoW, and GloVe demonstrate high semantic relatedness in retrieving the top eight nearest neighbor words. Word similarity comparison of Word Embeddings ::: Word pair relationship Generally, closer words are considered more important to a word’s meaning. The word embeddings models have the ability to capture the lexical relations between words. Identifying such relationship that connects words is important in NLP applications. We measure that semantic relationship by calculating the dot product of two vectors using Eq. DISPLAY_FORM48. The high cosine similarity score denotes the closer words in the embedding matrix, while less cosine similarity score means the higher distance between word pairs. We present the cosine similarity score of different semantically or syntactically related word pairs taken from the vocabulary in Table TABREF77 along with English translation, which shows the average similarity of 0.632, 0.650, 0.591 yields by CBoW, SG and GloVe respectively. The SG model achieved a high average similarity score of 0.650 followed by CBoW with a 0.632 average similarity score. The GloVe also achieved a considerable average score of 0.591 respectively. However, the average similarity score of SdfastText is 0.388 and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText. This shows that along with performance, the vocabulary in SdfastText is also limited as compared to our proposed word embeddings. Moreover, the average semantic relatedness similarity score between countries and their capitals is shown in Table TABREF78 with English translation, where SG also yields the best average score of 0.663 followed by CBoW with 0.611 similarity score. The GloVe also yields better semantic relatedness of 0.576 and the SdfastText yield an average score of 0.391. The first query word China-Beijing is not available the vocabulary of SdfastText. However, the similarity score between Afghanistan-Kabul is lower in our proposed CBoW, SG, GloVe models because the word Kabul is the name of the capital of Afghanistan as well as it frequently appears as an adjective in Sindhi text which means able. Word similarity comparison of Word Embeddings ::: Comparison with WordSim353 We evaluate the performance of our proposed word embeddings using the WordSim353 dataset by translation English word pairs to Sindhi. Due to vocabulary differences between English and Sindhi, we were unable to find the authentic meaning of six terms, so we left these terms untranslated. So our final Sindhi WordSim353 consists of 347 word pairs. Table TABREF80 shows the Spearman correlation results using Eq. DISPLAY_FORM51 on different dimensional embeddings on the translated WordSim353. The Table TABREF80 presents complete results with the different ws for CBoW, SG and GloVe in which the ws=7 subsequently yield better performance than ws of 3 and 5, respectively. The SG model outperforms CBoW and GloVe in semantic and syntactic similarity by achieving the performance of 0.629 with ws=7. 
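For reference, the WordSim353 evaluation described above can be sketched as follows: the model's cosine similarities for the 347 translated word pairs are correlated with the human scores using Spearman's rank correlation from Eq. DISPLAY_FORM51. The TSV file name and its three-column format are assumptions.

```python
# Minimal sketch of the Spearman evaluation on the translated WordSim353.
import csv
from scipy.stats import spearmanr
from gensim.models import KeyedVectors

wv = KeyedVectors.load("sindhi_sg_300d.kv")      # hypothetical file name

model_scores, human_scores = [], []
with open("sindhi_wordsim353.tsv", encoding="utf-8") as f:
    for w1, w2, human in csv.reader(f, delimiter="\t"):   # word1, word2, human score
        if w1 in wv and w2 in wv:
            model_scores.append(wv.similarity(w1, w2))     # cosine similarity
            human_scores.append(float(human))

rho, _p = spearmanr(model_scores, human_scores)
print(f"Spearman correlation: {rho:.3f}")
```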
In comparison with English BIBREF27 achieved the average semantic and syntactic similarity of 0.637, 0.656 with CBoW and SG, respectively. Therefore, despite the challenges in translation from English to Sindhi, our proposed Sindhi word embeddings have efficiently captured the semantic and syntactic relationship. Word similarity comparison of Word Embeddings ::: Visualization We use t-Distributed Stochastic Neighboring (t-SNE) dimensionality BIBREF36 reduction algorithm with PCA BIBREF37 for exploratory embeddings analysis in 2-dimensional map. The t-SNE is a non-linear dimensionality reduction algorithm for visualization of high dimensional datasets. It starts the probability calculation of similar word clusters in high-dimensional space and calculates the probability of similar points in the corresponding low-dimensional space. The purpose of t-SNE for visualization of word embeddings is to keep similar words close together in 2-dimensional $x,y$ coordinate pairs while maximizing the distance between dissimilar words. The t-SNE has a perplexity (PPL) tunable parameter used to balance the data points at both the local and global levels. We visualize the embeddings using PPL=20 on 5000-iterations of 300-D models. We use the same query words (see Table TABREF74) by retrieving the top 20 nearest neighboring word clusters for a better understanding of the distance between similar words. Every query word has a distinct color for the clear visualization of a similar group of words. The closer word clusters show the high similarity between the query and retrieved word clusters. The word clusters in SG (see Fig. FIGREF83) are closer to their group of semantically related words. Secondly, the CBoW model depicted in Fig. FIGREF82 and GloVe Fig. FIGREF84 also show the better cluster formation of words than SdfastText Fig. FIGREF85, respectively. Discussion and future work In this era of the information age, the existence of LRs plays a vital role in the digital survival of natural languages because the NLP tools are used to process a flow of un-structured data from disparate sources. It is imperative to mention that presently, Sindhi Persian-Arabic is frequently used in online communication, newspapers, public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for its LRs is also increasing for the development of language technology tools. But little work has been carried out for the development of resources which is not sufficient to design a language independent or machine learning algorithms. The present work is a first comprehensive initiative on resource development along with their evaluation for statistical Sindhi language processing. More recently, the NN based approaches have produced a state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from the large unlabelled corpus. Such word embeddings have also motivated the work on low-resourced languages. Our work mainly consists of novel contributions of resource development along with comprehensive evaluation for the utilization of NN based approaches in SNLP applications. The large corpus obtained from multiple web resources is utilized for the training of word embeddings using SG, CBoW and Glove models. The intrinsic evaluation along with comparative results demonstrates that the proposed Sindhi word embeddings have accurately captured the semantic information as compare to recently revealed SdfastText word vectors. 
The SG yield best results in nearest neighbors, word pair relationship and semantic similarity. The performance of CBoW is also close to SG in all the evaluation matrices. The GloVe also yields better word representations; however SG and CBoW models surpass the GloVe model in all evaluation matrices. Hyperparameter optimization is as important as designing a new algorithm. The choice of optimal parameters is a key aspect of performance gain in learning robust word embeddings. Moreover, We analysed that the size of the corpus and careful preprocessing steps have a large impact on the quality of word embeddings. However, in algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, and overall window size, learning rate, number of epochs are the core parameters that largely influence the performance of word embeddings models. Ultimately, the new corpus of low-resourced Sindhi language, list of stop words and pretrained word embeddings along with empirical evaluation, will be a good supplement for future research in SSLP applications. In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging, named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks and the extrinsic evaluation approach will be employed for the performance analysis of proposed word embeddings. Moreover, we will also utilize the corpus using Bi-directional Encoder Representation Transformer BIBREF13 for learning deep contextualized Sindhi word representations. Furthermore, the generated word embeddings will be utilized for the automatic construction of Sindhi WordNet. Conclusion In this paper, we mainly present three novel contributions of large corpus development contains large vocabulary of more than 61 million tokens, 908,456 unique words. Secondly, the list of Sindhi stop words is constructed by finding their high frequency and least importance with the help of Sindhi linguistic expert. Thirdly, the unsupervised Sindhi word embeddings are generated using state-of-the-art CBoW, SG and GloVe algorithms and evaluated using popular intrinsic evaluation approaches of cosine similarity matrix and WordSim353 for the first time in Sindhi language processing. We translate English WordSim353 using the English-Sindhi bilingual dictionary, which will also be a good resource for the evaluation of Sindhi word embeddings. Moreover, the proposed word embeddings are also compared with recently revealed SdfastText word representations. Our empirical results demonstrate that our proposed Sindhi word embeddings have captured high semantic relatedness in nearest neighboring words, word pair relationship, country, and capital and WordSim353. The SG yields the best performance than CBoW and GloVe models subsequently. However, the performance of GloVe is low on the same vocabulary because of character-level learning of word representations and sub-sampling approaches in SG and CBoW. Our proposed Sindhi word embeddings have surpassed SdfastText in the intrinsic evaluation matrix. Also, the vocabulary of SdfastText is limited because they are trained on a small Wikipedia corpus of Sindhi Persian-Arabic. We will further investigate the extrinsic performance of proposed word embeddings on the Sindhi text classification task in the future. 
The proposed resources, along with their systematic evaluation, will be a valuable addition to the computational resources for statistical Sindhi language processing.
No
a1064307a19cd7add32163a70b6623278a557946
a1064307a19cd7add32163a70b6623278a557946_0
Q: How many uniue words are in the dataset? Text: Introduction Sindhi is a rich morphological, mutltiscript, and multidilectal language. It belongs to the Indo-Aryan language family BIBREF0, with significant cultural and historical background. Presently, it is recognized as is an official language BIBREF1 in Sindh province of Pakistan, also being taught as a compulsory subject in Schools and colleges. Sindhi is also recognized as one of the national languages in India. Ulhasnagar, Rajasthan, Gujarat, and Maharashtra are the largest Indian regions of Sindhi native speakers. It is also spoken in other countries except for Pakistan and India, where native Sindhi speakers have migrated, such as America, Canada, Hong Kong, British, Singapore, Tanzania, Philippines, Kenya, Uganda, and South, and East Africa. Sindhi has rich morphological structure BIBREF2 due to a large number of homogeneous words. Historically, it was written in multiple writing systems, which differ from each other in terms of orthography and morphology. The Persian-Arabic is the standard script of Sindhi, which was officially accepted in 1852 by the British government. However, the Sindhi-Devanagari is also a popular writing system in India being written in left to right direction like the Hindi language. Formerly, Khudabadi, Gujrati, Landa, Khojki, and Gurumukhi were also adopted as its writing systems. Even though, Sindhi has great historical and literal background, presently spoken by nearly 75 million people BIBREF1. The research on SNLP was coined in 2002, however, IT grabbed research attention after the development of its Unicode system BIBREF3. But still, Sindhi stands among the low-resourced languages due to the scarcity of core language processing resources of the raw and annotated corpus, which can be utilized for training robust word embeddings or the use of machine learning algorithms. Since the development of annotated datasets requires time and human resources. The Language Resources (LRs) are fundamental elements for the development of high quality NLP systems based on automatic or NN based approaches. The LRs include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. The development of such resources has received great research interest for the digitization of human languages BIBREF4. Many world languages are rich in such language processing resources integrated in their software tools including English BIBREF5 BIBREF6, Chinese BIBREF7 and other languages BIBREF8 BIBREF9. The Sindhi language lacks the basic computational resources BIBREF10 of a large text corpus, which can be utilized for training robust word embeddings and developing language independent NLP applications including semantic analysis, sentiment analysis, parts of the speech tagging, named entity recognition, machine translation BIBREF11, multitasking BIBREF12, BIBREF13. Presently Sindhi Persian-Arabic is frequently used for online communication, newspapers, public institutions in Pakistan, and India BIBREF1. But little work has been carried out for the development of LRs such as raw corpus BIBREF14, BIBREF15, annotated corpus BIBREF16, BIBREF17, BIBREF1, BIBREF18. In the best of our knowledge, Sindhi lacks the large unlabelled corpus which can be utilized for generating and evaluating word embeddings for Statistical Sindhi Language Processing (SSLP). One way to to break out this loop is to learn word embeddings from unlabelled corpora, which can be utilized to bootstrap other downstream NLP tasks. 
The word embedding is a new term of semantic vector space BIBREF19, distributed representations BIBREF20, and distributed semantic models. It is a language modeling approach BIBREF21 used for the mapping of words and phrases into $n$-dimensional dense vectors of real numbers that effectively capture the semantic and syntactic relationship with neighboring words in a geometric way BIBREF22 BIBREF23. Such as “Einstein” and “Scientist” would have greater similarity compared with “Einstein” and “doctor.” In this way, word embeddings accomplish the important linguistic concept of “a word is characterized by the company it keeps". More recently NN based models yield state-of-the-art performance in multiple NLP tasks BIBREF24 BIBREF25 with the word embeddings. One of the advantages of such techniques is they use unsupervised approaches for learning representations and do not require annotated corpus which is rare for low-resourced Sindhi language. Such representions can be trained on large unannotated corpora, and then generated representations can be used in the NLP tasks which uses a small amount of labelled data. In this paper, we address the problems of corpus construction by collecting a large corpus of more than 61 million words from multiple web resources using the web-scrappy framework. After the collection of the corpus, we carefully preprocessed for the filtration of noisy text, e.g., the HTML tags and vocabulary of the English language. The statistical analysis is also presented for the letter, word frequencies and identification of stop-words. Finally, the corpus is utilized to generate Sindhi word embeddings using state-of-the-art GloVe BIBREF26 SG and CBoW BIBREF27 BIBREF20 BIBREF24 algorithms. The popular intrinsic evaluation method BIBREF20 BIBREF28 BIBREF29 of calculating cosine similarity between word vectors and WordSim353 BIBREF30 are employed to measure the performance of the learned Sindhi word embeddings. We translated English WordSim353 word pairs into Sindhi using bilingual English to Sindhi dictionary. The intrinsic approach typically involves a pre-selected set of query terms BIBREF23 and semantically related target words, which we refer to as query words. Furthermore, we also compare the proposed word embeddings with recently revealed Sindhi fastText (SdfastText) BIBREF25 word representations. To the best of our knowledge, this is the first comprehensive work on the development of large corpus and generating word embeddings along with systematic evaluation for low-resourced Sindhi Persian-Arabic. The synopsis of our novel contributions is listed as follows: We present a large corpus of more than 61 million words obtained from multiple web resources and reveal a list of Sindhi stop words. We develop a text cleaning pipeline for the preprocessing of the raw corpus. Generate word embeddings using GloVe, CBoW, and SG Word2Vec algorithms also evaluate and compare them using the intrinsic evaluation approaches of cosine similarity matrix and WordSim353. We are the first to evaluate SdfastText word representations and compare them with our proposed Sindhi word embeddings. The remaining sections of the paper are organized as; Section SECREF2 presents the literature survey regarding computational resources, Sindhi corpus construction, and word embedding models. Afterwards, Section SECREF3 presents the employed methodology, Section SECREF4 consist of statistical analysis of the developed corpus. Section SECREF5 present the experimental setup. 
The intrinsic evaluation results along with comparison are given in Section SECREF6. The discussion and future work are given in Section SECREF7, and lastly, Section SECREF8 presents the conclusion. Related work The natural language resources refer to a set of language data and descriptions BIBREF31 in machine readable form, used for building, improving, and evaluating NLP algorithms or softwares. Such resources include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. Many world languages are rich in such language processing resources integrated in the software tools including NLTK for English BIBREF5, Stanford CoreNLP BIBREF6, LTP for Chinese BIBREF7, TectoMT for German, Russian, Arabic BIBREF8 and multilingual toolkit BIBREF9. But Sindhi language is at an early stage for the development of such resources and software tools. The corpus construction for NLP mainly involves important steps of acquisition, preprocessing, and tokenization. Initially, BIBREF14 discussed the morphological structure and challenges concerned with the corpus development along with orthographical and morphological features in the Persian-Arabic script. The raw and annotated corpus BIBREF1 for Sindhi Persian-Arabic is a good supplement towards the development of resources, including raw and annotated datasets for parts of speech tagging, morphological analysis, transliteration between Sindhi Persian-Arabic and Sindhi-Devanagari, and machine translation system. But the corpus is acquired only form Wikipedia-dumps. A survey-based study BIBREF4 provides all the progress made in the Sindhi Natural Language Processing (SNLP) with the complete gist of adopted techniques, developed tools and available resources which show that work on resource development on Sindhi needs more sophisticated efforts. The raw corpus is utilized for word segmentation BIBREF32 of Sindhi Persian-Arabic. More recently, an initiative towards the development of resources is taken BIBREF16 by open sourcing annotated dataset of Sindhi Persian-Arabic obtained from news and social blogs. The existing and proposed work is presented in Table TABREF9 on the corpus development, word segmentation, and word embeddings, respectively. The power of word embeddings in NLP was empirically estimated by proposing a neural language model BIBREF21 and multitask learning BIBREF12, but recently usage of word embeddings in deep neural algorithms has become integral element BIBREF33 for performance acceleration in deep NLP applications. The CBoW and SG BIBREF27 BIBREF20 popular word2vec neural architectures yielded high quality vector representations in lower computational cost with integration of character-level learning on large corpora in terms of semantic and syntactic word similarity later extended BIBREF33 BIBREF24. Both approaches produce state-of-the-art accuracy with fast training performance, better representations of less frequent words and efficient representation of phrases as well. BIBREF34 proposed NN based approach for generating morphemic-level word embeddings, which surpassed all the existing embedding models in intrinsic evaluation. A count-based GloVe model BIBREF26 also yielded state-of-the-art results in an intrinsic evaluation and downstream NLP tasks. The performance of Word embeddings is evaluated using intrinsic BIBREF23 BIBREF29 and extrinsic evaluation BIBREF28 methods. The performance of word embeddings can be measured with intrinsic and extrinsic evaluation approaches. 
The intrinsic approach measures the internal quality of word embeddings, for example by querying the nearest neighboring words and calculating the semantic or syntactic similarity between related word pairs. A direct-comparison method for intrinsic evaluation of word embeddings measures the neighborhood of a query word in vector space; the key advantage of that method is that it reduces bias and provides insight for data-driven relevance judgment. An extrinsic evaluation approach evaluates performance on downstream NLP tasks, such as parts-of-speech tagging or named-entity recognition BIBREF23, but the Sindhi language lacks an annotated corpus for this type of evaluation. Moreover, extrinsic evaluation is time-consuming and difficult to interpret. Therefore, we opt for the intrinsic evaluation method BIBREF28 to get a quick insight into the quality of the proposed Sindhi word embeddings by measuring the cosine distance between similar words and using the WordSim353 dataset. A study reveals that the choice of optimized hyperparameters BIBREF35 has a greater impact on the quality of pretrained word embeddings than designing a novel algorithm. Therefore, we optimized the hyperparameters for generating robust Sindhi word embeddings with the CBoW, SG, and GloVe models. Embedding visualization is also useful for inspecting the similarity of word clusters; therefore, we use the t-SNE BIBREF36 dimensionality reduction algorithm, combined with PCA BIBREF37, for compressing high-dimensional embeddings into 2-dimensional $x$,$y$ coordinate pairs. PCA combines input features by dropping the least important features while retaining the most valuable ones. Methodology This section presents the employed methodology in detail for corpus acquisition, preprocessing, statistical analysis, and generating Sindhi word embeddings. Methodology ::: Task description We initiate this work from scratch by collecting a large corpus from multiple web resources. After preprocessing and statistical analysis of the corpus, we generate Sindhi word embeddings with the state-of-the-art CBoW, SG, and GloVe algorithms. The generated word embeddings are evaluated using the intrinsic evaluation approaches of cosine similarity between nearest neighbors and word pairs, and WordSim-353 for distributional semantic similarity. Moreover, we use t-SNE with PCA to compare the distance between similar words via visualization. Methodology ::: Corpus acquisition A corpus is a collection of human language text BIBREF31 built with a specific purpose. The statistical analysis of a corpus provides quantitative, reusable data and an opportunity to examine intuitions and ideas about language; therefore, the corpus has great importance for the study of written language. Realizing the necessity of a large text corpus for Sindhi, we started this research by collecting a raw corpus from multiple web resources using the web-scrappy framework: news columns of the daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from the Wichaar social blog, news from the Focus WordPress blog, historical writings, novels, stories, and books from the Sindh Salamat literary website, novels, history, and religious books from the Sindhi Adabi Board, and tweets regarding news and sports collected from Twitter.
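To make the acquisition step concrete, the following is a minimal sketch of the kind of crawler used for such web sources, assuming the Scrapy framework; the URL, spider name, and CSS selectors are placeholders rather than the actual site structure or our exact spider.

```python
import scrapy

class SindhiNewsSpider(scrapy.Spider):
    """Minimal sketch of a news-column crawler; the start URL and the CSS
    selectors are placeholders, not the real layout of the crawled sites."""
    name = "sindhi_news"
    start_urls = ["https://example.com/sindhi-columns"]  # placeholder URL

    def parse(self, response):
        # Yield the raw text of each article body found on the listing page.
        for article in response.css("div.article"):
            yield {"text": " ".join(article.css("p::text").getall())}
        # Follow pagination links, if the site exposes any.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

The raw text items collected in this way are then concatenated into a single UTF-8 input for the preprocessing pipeline described next.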
Methodology ::: Preprocessing The preprocessing of a text corpus obtained from multiple web resources is a challenging task, and it becomes even more complicated when working on a low-resourced language like Sindhi due to the lack of open-source preprocessing tools such as NLTK BIBREF5 for English. Therefore, we design a preprocessing pipeline, depicted in Figure FIGREF22, for the filtration of unwanted data and of vocabulary from other languages such as English, in order to prepare input for the word embeddings. The involved preprocessing steps are described in detail below Figure FIGREF22. Moreover, we reveal a list of Sindhi stop words BIBREF38, which is labor-intensive and requires human judgment as well. Hence, the most frequent and least important words are classified as stop words with the help of a Sindhi linguistic expert, and a partial list of Sindhi stop words is given in Table TABREF61. We use the Python programming language to implement the preprocessing pipeline with regex and string functions. Input: The collected text documents were concatenated into a single input in UTF-8 format. Replacement of symbols: The punctuation marks of full stop, hyphen, apostrophe, comma, quotation, and exclamation were replaced with white space for authentic tokenization, because without this replacement words were found joined to their next or previous corresponding words. Filtration of noisy data: Text acquired from web resources contains a huge amount of noisy data. Therefore, we filtered out unimportant data such as the remaining punctuation marks, special characters, HTML tags, all types of numeric entities, email addresses, and web addresses. Normalization: In this step, we tokenize the corpus and normalize it to lower case for the filtration of multiple white spaces, English vocabulary, and duplicate words. The stop words were filtered out only when preparing input for GloVe; the sub-sampling approach in CBoW and SG can discard the most frequent or stop words automatically.
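A minimal sketch of this cleaning pipeline in Python, using only regex and string functions, is shown below; the character ranges and the punctuation set are illustrative assumptions rather than our exact rules.

```python
import re

# Unicode ranges of the Arabic script blocks that cover Sindhi Persian-Arabic letters.
SINDHI_CHARS = r"\u0600-\u06FF\u0750-\u077F"

def clean(text, stop_words=frozenset()):
    """Illustrative cleaning pipeline: replace punctuation with white space,
    drop HTML tags, URLs, e-mail addresses, digits and Latin (English)
    vocabulary, then tokenise on white space and optionally remove stop words."""
    text = re.sub(r"<[^>]+>", " ", text)                      # HTML tags
    text = re.sub(r"(https?://\S+|\S+@\S+)", " ", text)       # web and e-mail addresses
    text = re.sub(r"[.\-'\",!?؟،؛]", " ", text)               # punctuation -> white space
    text = re.sub(r"[A-Za-z0-9]+", " ", text)                 # English words and numerics
    text = re.sub(rf"[^{SINDHI_CHARS}\s]", " ", text)         # any remaining symbols
    tokens = text.split()                                      # white-space tokenisation
    return [t for t in tokens if t not in stop_words]          # stop-word removal (GloVe only)
```

In practice, the stop-word filter is applied only when preparing input for GloVe, since sub-sampling handles frequent words for CBoW and SG.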
Methodology ::: Word embedding models NN based approaches have produced state-of-the-art performance in NLP through the use of robust word embeddings generated from large unlabelled corpora. Therefore, word embeddings have become the main component for setting up new benchmarks in NLP using deep learning approaches. Most recently, the use cases of word embeddings are not limited to boosting statistical NLP applications; they can also be used to develop language resources, such as the automatic construction of a WordNet BIBREF39 using an unsupervised approach. Word embedding can be precisely defined as the encoding of a vocabulary $V$ such that each word $w$ from $V$ is mapped to a vector $\overrightarrow{w}$ in an $N$-dimensional embedding space. Embedding models can be broadly categorized into predictive and count-based methods, being generated by employing co-occurrence statistics, NN algorithms, and probabilistic models. The GloVe BIBREF26 algorithm treats each word as a single entity in the corpus and generates a vector for each word. In contrast, CBoW and SG BIBREF27 BIBREF20, later extended BIBREF33 BIBREF24 and well known as word2vec, rely on a simple two-layered NN architecture that uses a linear activation function in the hidden layer and softmax in the output layer; the extended word2vec model treats each word as a bag of character n-grams. Methodology ::: GloVe GloVe is a log-bilinear regression model BIBREF26 which combines the two methods of local context windows and global matrix factorization for training word embeddings of a given vocabulary in an unsupervised way. It weights the contexts using a harmonic function; for example, a context word four tokens away from an occurrence is counted as $\frac{1}{4}$. GloVe's implementation represents each word $w \in V_{w}$ and context $c \in V_{c}$ as $D$-dimensional vectors $\overrightarrow{w}$ and $\overrightarrow{c}$ such that $\overrightarrow{w} \cdot \overrightarrow{c} + b_{w} + b_{c} = \log X_{wc}$, where $X_{wc}$ is the co-occurrence count of $w$ and $c$, $b_{w}$ is a bias taken from a row vector of length $\left|V_{w}\right|$, and $b_{c}$ is a bias taken from a column vector of length $\left|V_{c}\right|$. Methodology ::: Continuous bag-of-words The standard CBoW is the inverse of the SG BIBREF27 model: it predicts the input word from its context. The length of the input in the CBoW model depends on the setting of the context window size, which determines the distance to the left and right of the target word. Hence the context is a window that contains the neighboring words; given a sequence of $T$ words $w=\left\lbrace w_{1}, w_{2}, \dots , w_{T}\right\rbrace $, the objective of CBoW is to maximize the probability of each word given its neighboring words, i.e. $\frac{1}{T} \sum _{t=1}^{T} \log p\left(w_{t} \mid c_{t}\right)$, where $c_{t}$ is the context of the $t^{\text{th}}$ word, for example the window $w_{t-c}, \ldots w_{t-1}, w_{t+1}, \ldots w_{t+c}$ of size $2 c$. Methodology ::: Skip gram The SG model predicts the surrounding words given an input word BIBREF20, with the training objective of learning word embeddings that efficiently predict the neighboring words. The goal of skip-gram is to maximize the average log-probability of the words $w=\left\lbrace w_{1}, w_{2}, \dots , w_{T}\right\rbrace $ across the entire training corpus, $\frac{1}{T} \sum _{t=1}^{T} \sum _{c \in c_{t}} \log p\left(w_{c} \mid w_{t}\right)$, where $c_{t}$ denotes the set of indices of the words near $w_{t}$ in the training corpus. Methodology ::: Hyperparameters ::: Sub-sampling The sub-sampling BIBREF20 approach is useful for diluting the most frequent or stop words; it also accelerates learning and increases the accuracy of the learned rare-word vectors. Numerous words in English, e.g., 'the', 'you', 'that', carry little importance but appear very frequently in text. However, considering all words equally would lead to over-fitting of the model parameters BIBREF24 on the frequent word embeddings and under-fitting on the rest; therefore, it is useful to counter the imbalance between rare and repeated words. The sub-sampling technique randomly removes the most frequent words using a threshold $t$, where each word $w_{i}$ is discarded during training with probability $p\left(w_{i}\right)=1-\sqrt{t / f\left(w_{i}\right)}$, $f(w_i)$ is the frequency of word $w_{i}$, and $t>0$ is a parameter. Methodology ::: Hyperparameters ::: Dynamic context window Traditional word embedding models usually use a fixed-size context window. For instance, if the window size is ws=6, then a word six tokens away from the target is treated the same as the adjacent word. A weighting scheme is instead used to assign more weight to closer words, as closer words are generally considered more important to the meaning of the target word; the CBoW, SG and GloVe models employ this scheme. The GloVe model weights the contexts using a harmonic function, so that, for example, a context word four tokens away from an occurrence is counted as $\frac{1}{4}$.
However, the CBoW and SG implementations weigh the contexts by dividing by the distance from the target word; e.g. ws=6 weighs its contexts by $\frac{6}{6}, \frac{5}{6}, \frac{4}{6}, \frac{3}{6}, \frac{2}{6}, \frac{1}{6}$. Methodology ::: Hyperparameters ::: Sub-word model The sub-word model BIBREF24 can learn the internal structure of words by sharing character representations across words. In that way, the vector for each word is the sum of its character $n$-gram vectors. For example, by setting the letter $n$-gram size from $min=3$ to $max=6$, the vector of the word "table" is the sum of the $n$-gram vectors $<ta$, $tab$, $tabl$, $table$, $table>$, $abl$, $able$, $able>$, $ble$, $ble>$, $le>$, i.e. all sub-words of "table" with minimum length $minn=3$ and maximum length $maxn=6$ (a small code sketch of this decomposition is given below, at the end of the hyperparameter descriptions). The $<$ and $>$ symbols are used to separate prefixes and suffixes from other character sequences. In this way, the sub-word model exploits the principles of morphology, which improves the quality of infrequent word representations. In addition to the character $n$-grams, the input word $w$ itself is also included in the set of character $n$-grams in order to learn the representation of each word. Given a word $w$ and an input dictionary of $n$-grams of size $K$, with $K_{w} \subset \lbrace 1, \ldots , K\rbrace $ the set of $n$-grams appearing in $w$ and a representation $z_{k}$ associated with each $n$-gram, each word is represented by the sum of its character $n$-gram representations, and the scoring function is $s(w, c)=\sum _{k \in K_{w}} z_{k}^{\top } v_{c}$, where $v_{c}$ is the vector of the context word $c$. Methodology ::: Hyperparameters ::: Position-dependent weights The position-dependent weighting approach BIBREF40 is used to avoid directly encoding representations for words and their positions, which can lead to an over-fitting problem. The approach learns positional representations within contextual word representations and uses them to reweight the word embeddings, thereby capturing good contextual representations at a lower computational cost. Here, each position $p$ in the context window is associated with a vector $d_{p}$, and the context vector $v_{C}$ of $w_{t}$ is the average of the context-word vectors reweighted by their positional vectors over the set of relative positions $P$ in the context window. Methodology ::: Hyperparameters ::: Shifted point-wise mutual information The use of a sparse Shifted Positive Point-wise Mutual Information (SPPMI) BIBREF41 word-context matrix in learning word representations improves results on two word similarity tasks. CBoW and SG have the hyperparameter $k$ (the number of negatives) BIBREF27 BIBREF20, which affects the value that both models try to optimize for each $(w, c)$: $PMI(w, c)-\log k$. The parameter $k$ has two functions: it enables a better estimation of negative examples, and it acts as a prior on the probability of observing a positive example (an actual occurrence of $(w,c)$). Methodology ::: Hyperparameters ::: Deleting rare words Before creating a context window, the automatic deletion of rare words also leads to a performance gain in the CBoW, SG and GloVe models, and further increases the effective size of the context windows.
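The character $n$-gram decomposition used by the sub-word model can be sketched with the illustrative helper below, mirroring the "table" example with the boundary symbols $<$ and $>$; it is a minimal sketch, not the fastText implementation itself.

```python
def char_ngrams(word, minn=2, maxn=7):
    """Return the character n-grams of a word (plus the word itself),
    using '<' and '>' as boundary symbols, as in the sub-word model."""
    wrapped = f"<{word}>"
    ngrams = set()
    for n in range(minn, maxn + 1):
        for i in range(len(wrapped) - n + 1):
            ngrams.add(wrapped[i:i + n])
    ngrams.add(wrapped)          # the full word is also kept as a feature
    return sorted(ngrams)

# char_ngrams("table", minn=3, maxn=6) yields sub-words such as
# '<ta', 'tab', 'tabl', 'table', 'able>', 'ble>', 'le>' and the full '<table>'.
```

The word vector is then the sum of the vectors associated with these sub-words, which is what allows rare and misspelled Sindhi words to receive useful representations.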
Methodology ::: Evaluation methods The intrinsic evaluation is based on semantic similarity BIBREF23 in word embeddings. The word similarity measure approach states BIBREF35 that words are similar if they appear in similar contexts. We measure the word similarity of the proposed Sindhi word embeddings using the dot product method and WordSim353. Methodology ::: Evaluation methods ::: Cosine similarity The cosine similarity between two non-zero vectors is a popular measure that calculates the cosine of the angle between them, which can be derived using the Euclidean dot product. The dot product is the sum of the products of the corresponding components of both vectors; its result is not another vector but a single value, i.e. a scalar. For two vectors $\overrightarrow{a}=\left(a_{1}, a_{2}, a_{3}, \dots , a_{n}\right)$ and $\overrightarrow{b}=\left({b}_{1}, {b}_{2}, {b}_{3}, \ldots , {b}_{n}\right)$, where $a_{n}$ and $b_{n}$ are the components of the vectors and $n$ is their dimension, the dot product is $\overrightarrow{a} \cdot \overrightarrow{b}=\sum _{i=1}^{n} a_{i} b_{i}$. The cosine of two non-zero vectors can be derived from the Euclidean dot product formula $\overrightarrow{a} \cdot \overrightarrow{b}=\Vert \overrightarrow{a}\Vert \, \Vert \overrightarrow{b}\Vert \cos (\theta )$. Hence, given the two vectors of attributes $a$ and $b$, the cosine similarity $\cos ({\theta })$ is represented using the dot product and magnitudes as $\cos (\theta )=\frac{\sum _{i=1}^{n} a_{i} b_{i}}{\sqrt{\sum _{i=1}^{n} a_{i}^{2}}\,\sqrt{\sum _{i=1}^{n} b_{i}^{2}}}$, where $a_{i}$ and $b_{i}$ are the components of vectors $\overrightarrow{a}$ and $\overrightarrow{b}$, respectively. Methodology ::: Evaluation methods ::: WordSim353 The WordSim353 BIBREF42 dataset is popular for the evaluation of lexical similarity and relatedness. A similarity score was assigned by 13 to 16 human subjects with semantic relations BIBREF30 to 353 English noun pairs. Due to the lack of annotated datasets in the Sindhi language, we translated WordSim353 using an English-to-Sindhi bilingual dictionary for the evaluation of our proposed Sindhi word embeddings and SdfastText. We use the Spearman correlation coefficient for the semantic and syntactic similarity comparison, which is used to discover the strength of linear or nonlinear relationships when there are no repeated data values. A perfect Spearman's correlation of $+1$ or $-1$ indicates the strength of the link between two sets of data (word pairs) when the observations are monotonically increasing or decreasing functions of each other, and it is computed as $r_{s}=1-\frac{6 \sum _{i} d_{i}^{2}}{n\left(n^{2}-1\right)}$, where $r_s$ is the rank correlation coefficient, $n$ denotes the number of observations, and $d_{i}$ is the rank difference between the $i^{th}$ observations. Statistical analysis of corpus The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of the collected corpus (see Table TABREF52), with the number of sentences, words, and unique tokens. Statistical analysis of corpus ::: Letter occurrences The frequency of letter occurrences in human language is not arbitrarily organized but follows specific rules, which enables us to describe some linguistic regularities. Zipf's law BIBREF43 suggests that if the frequencies of letter or word occurrences are ranked in descending order, they approximately follow $F_{r}=a / r^{b}$, where $F_{r}$ is the letter frequency of the $r$th rank and $a$ and $b$ are parameters of the input text. The comparative letter frequency in the corpus is the total number of occurrences of a letter divided by the total number of letters present in the corpus. The letter frequencies in our developed corpus are depicted in Figure FIGREF55; the corpus contains 187,620,276 characters in total. The Sindhi Persian-Arabic alphabet consists of 52 letters, but 59 letters are detected in the vocabulary; the additional seven letters are modified uni-grams and standalone honorific symbols.
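The letter and word frequency statistics reported in this section reduce to simple counting; the sketch below is illustrative and assumes the tokenised corpus produced by the preprocessing pipeline.

```python
from collections import Counter

def frequency_profile(tokens, top_k=20):
    """Count word and letter occurrences in a tokenised corpus and return
    both rankings in descending order of frequency (Zipf-style ranking)."""
    word_freq = Counter(tokens)
    letter_freq = Counter(ch for tok in tokens for ch in tok)
    total_letters = sum(letter_freq.values())
    # Comparative letter frequency: occurrences of a letter divided by the
    # total number of letters present in the corpus.
    letter_ratio = {ch: n / total_letters for ch, n in letter_freq.items()}
    top_words = word_freq.most_common(top_k)   # the top-ranked words are stop-word candidates
    top_letters = sorted(letter_ratio.items(), key=lambda kv: kv[1], reverse=True)
    return top_words, top_letters
```

The highest-ranked words from such a profile are exactly the stop-word candidates whose grammatical status is then judged manually, as described in the following subsections.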
Statistical analysis of corpus ::: Letter n-grams frequency We denote a combination of letter occurrences in a word as an n-gram, where each letter is a gram in the word. The letter n-gram frequency is carefully analyzed in order to find the length of words, which is essential for developing NLP systems, including the learning of word embeddings, e.g. choosing the minimum or maximum sub-word length for character-level representation learning BIBREF24. We calculate the letter n-grams in words along with their percentage in the developed corpus (see Table TABREF57). The bi-gram words are the most frequent, mostly consisting of stop words, and 4-gram words have the second-highest frequency. Statistical analysis of corpus ::: Word Frequencies The word frequency count is an observation of word occurrences in the text. Commonly used words, such as the word "the" in English, are considered to have a higher frequency, and rarely used words to have a lower frequency. Such frequencies can be calculated at the character or word level. We calculate word frequencies by counting the occurrences of a word $w$ in the corpus $c$, where the frequency of $w$ is the sum of every occurrence $k$ of $w$ in $c$. Statistical analysis of corpus ::: Stop words The most frequent and least important words in NLP are often classified as stop words. The removal of such words can boost the performance of NLP models BIBREF38, e.g. in sentiment analysis and text classification, but the construction of such a word list is time-consuming and requires user decisions. Firstly, we determined Sindhi stop words by counting their term frequencies using Eq. DISPLAY_FORM59, and secondly, by analysing their grammatical status with the help of a Sindhi linguistic expert, because not all frequent words are stop words (see Figure FIGREF62). After determining the importance of such words with the help of human judgment, we placed them in the list of stop words. The total number of detected stop words in our developed corpus is 340. A partial list of the most frequent Sindhi stop words is given in Table TABREF61 along with their frequencies. The filtration of stop words is an essential preprocessing step for learning GloVe BIBREF26 word embeddings; therefore, we filtered out stop words when preparing input for the GloVe model. The sub-sampling approach BIBREF33 BIBREF24 is instead used to discard such most frequent words in the CBoW and SG models. Experiments and results Hyperparameter optimization BIBREF23 is more important than designing a novel algorithm. We carefully optimize the dictionary- and algorithm-based parameters of the CBoW, SG and GloVe algorithms. Hence, we conducted a large number of experiments for training and evaluation until finding the most suitable hyperparameters, which are depicted in Table TABREF64 and discussed in Section SECREF63. The choice of optimized hyperparameters is based on the high cosine similarity score in retrieving nearest neighboring words, the semantic and syntactic similarity between word pairs, WordSim353, and the visualization of the distance between the twenty nearest neighbours using t-SNE, respectively. All the experiments are conducted on a GTX 1080-TITAN GPU. Experiments and results ::: Hyperparameter optimization The state-of-the-art SG, CBoW BIBREF27 BIBREF33 BIBREF20 BIBREF24 and GloVe BIBREF26 word embedding algorithms are evaluated by parameter tuning for the development of Sindhi word embeddings. These parameters can be categorized into dictionary-based and algorithm-based, respectively.
The integration of character n-grams in learning word representations is an ideal method, especially for morphologically rich languages, because this approach has the ability to compute representations for rare and misspelled words. Sindhi is also a morphologically rich language; therefore, more robust embeddings became possible to train with the hyperparameter optimization of the SG, CBoW and GloVe algorithms. We tuned and evaluated the hyperparameters of the three algorithms individually, as discussed in the following. Number of epochs: Generally, more epochs over the corpus produce better results but take a longer training time. Therefore, we evaluated 10, 20, 30 and 40 epochs for each word embedding model, and 40 epochs consistently produced good results. Learning rate (lr): We tried lr values of $0.05$, $0.1$, and $0.25$; the optimal lr $(0.25)$ gives the best results for training all the embedding models. Dimensions ($D$): We evaluate and compare the quality of $100-D$, $200-D$, and $300-D$ embeddings using WordSim353 at different $ws$, and the optimal $300-D$ embeddings are evaluated with the cosine similarity matrix for querying nearest neighboring words and calculating the similarity between word pairs. The embedding dimensionality has little effect on the quality of the intrinsic evaluation process; however, the selection of embedding dimensions might have more impact on accuracy in certain downstream NLP applications, and lower embedding dimensions are faster to train and evaluate. Character n-grams: The selection of the minimum (minn) and maximum (maxn) length of character n-grams is an important parameter for learning character-level representations of words in the CBoW and SG models. Therefore, n-grams from $3-9$ were tested to analyse the impact on the accuracy of the embeddings, and we settled on character n-gram lengths of $minn=2$ and $maxn=7$ by keeping in view the word-length frequencies depicted in Table TABREF57. Window size (ws): A large ws means considering more context words, and similarly a small ws limits the number of context words. By changing the size of the dynamic context window, we tried ws values of 3, 5, and 7; the optimal ws=7 yields consistently better performance. Negative sampling (NS): More negative examples yield better results but take a longer training time. We tried 10, 20, and 30 negative examples for CBoW and SG; 20 negative examples for CBoW and SG yield significantly better performance at an acceptable average training time. Minimum word count (minw): We evaluated minimum word counts in the range from 1 to 8 and observed that the size of the input vocabulary decreases at a large scale when ignoring more words, and similarly increases when considering rare words. Ignoring words with a frequency of less than 4 in CBoW, SG, and GloVe consistently yields better results with a vocabulary of 200,000 words. Loss function (ls): We use hierarchical softmax (hs) for CBoW, negative sampling (ns) for SG, and the default loss function for GloVe BIBREF26. The recommended verbosity level, number of buckets, sampling threshold, and number of threads are used for training CBoW, SG BIBREF24, and GloVe BIBREF26.
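As an illustration of how these settings fit together, the sketch below trains SG and CBoW with character n-grams using the gensim implementation of fastText-style word2vec; gensim is an assumption here (only the algorithms are named in the paper), `corpus` stands for the preprocessed, tokenised sentences, and the sub-sampling threshold is an assumed value. GloVe would be trained analogously with its own reference implementation.

```python
from gensim.models import FastText

# Sketch only: `corpus` is a list of tokenised sentences from the
# preprocessing pipeline; parameter names follow gensim >= 4.0.
sg_model = FastText(
    sentences=corpus,
    vector_size=300,    # D = 300
    window=7,           # ws = 7
    sg=1,               # 1 = skip-gram, 0 = CBoW
    negative=20,        # negative samples
    min_count=4,        # ignore words with frequency < 4
    min_n=2, max_n=7,   # character n-gram range (minn, maxn)
    sample=1e-4,        # sub-sampling threshold (assumed value)
    alpha=0.25,         # initial learning rate
    epochs=40,
)

cbow_model = FastText(
    sentences=corpus, vector_size=300, window=7, sg=0,
    hs=1, negative=0,   # hierarchical softmax loss for CBoW
    min_count=4, min_n=2, max_n=7, alpha=0.25, epochs=40,
)
```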
Word similarity comparison of Word Embeddings ::: Nearest neighboring words The cosine similarity matrix BIBREF35 is a popular approach to compute the relationship between all embedding dimensions and their distinct relevance to a query word. Words with similar contexts obtain a high cosine similarity and geometric relatedness in terms of Euclidean distance, which is a common and primary method to measure the distance between a set of words and their nearest neighbors. For each query word, the top eight nearest neighboring words are determined by the highest cosine similarity scores using Eq. DISPLAY_FORM48. We present the English translations of both the query and the retrieved words and discuss their English meanings for ease of relevance judgment between the query and retrieved words. To take a closer look at the semantic and syntactic relationships captured in the proposed word embeddings, Table TABREF74 shows the top eight nearest neighboring words of five different query words, Friday, Spring, Cricket, Red, and Scientist, taken from the vocabulary. The first query word, Friday, returns the names of days Saturday, Sunday, Monday, Tuesday, Wednesday, and Thursday in an unordered sequence. SdfastText returns five names of days, Sunday, Thursday, Monday, Tuesday and Wednesday, respectively, and the GloVe model also returns five names of days. However, CBoW and SG give six names of days (all except Wednesday) along with different written forms of the query word Friday in the Sindhi language, which shows that CBoW and SG return more relevant words compared to SdfastText and GloVe. CBoW returned Add and GloVe returned Honorary, which are only slightly similar to the query word, but SdfastText returned two irrelevant words: Kameeso (N), which is the name (N) of a person in Sindhi, and Phrase, a combination of three Sindhi words that are not tokenized properly. Similarly, the nearest neighbors of the second query word, Spring, are retrieved accurately as names and seasons semantically related to the query word by CBoW, SG and GloVe, but SdfastText returned four irrelevant words out of eight: Dilbahar (N), Pharase, Ashbahar (N), and Farzana (N). The third query word is Cricket, the name of a popular game. The first retrieved word in CBoW is Kabadi (N), which is a popular national game in Pakistan; including Kabadi (N), all the words returned by CBoW, SG and GloVe are related to the game of cricket or are the names of other games. But the first word from SdfastText, Gone.Cricket, consists of two words joined by a punctuation mark (.), which indicates a tokenization error in the preprocessing step; the sixth retrieved word, Misspelled, is a combination of three words not related to the query word; and Played and Being played are also irrelevant and stop words. Moreover, the fourth query word, Red, gives results that contain names closely related to the query word and different forms of the query word written in the Sindhi language; the last word returned by SdfastText, Unknown, is irrelevant and not found in the Sindhi dictionary for translation. The last query word, Scientist, also retrieves semantically related words with CBoW, SG, and GloVe, but the first word given by SdfastText belongs to the Urdu language, which suggests that its vocabulary may also contain words of other languages, and another unknown word returned by SdfastText has no meaning in the Sindhi dictionary. Further interesting observations in the presented results are the diacritized words retrieved by our proposed word embeddings and the authentic tokenization achieved by the preprocessing step presented in Figure FIGREF22, whereas SdfastText returned tri-gram Phrase words for the query words Friday and Spring and a Misspelled word for the Cricket and Scientist query words. Hence, the overall performance of our proposed SG, CBoW, and GloVe models demonstrates high semantic relatedness in retrieving the top eight nearest neighboring words.
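The nearest-neighbour queries behind Table TABREF74 reduce to ranking the vocabulary by cosine similarity against the query vector; a minimal sketch, using the hypothetical `sg_model` from the training example above and an English placeholder instead of a Sindhi query token, is:

```python
# Retrieve the top-8 nearest neighbours of a query word by cosine similarity.
# `sg_model` is the skip-gram model from the training sketch; in practice the
# query would be a Sindhi token from the vocabulary.
neighbours = sg_model.wv.most_similar("Friday", topn=8)
for word, score in neighbours:
    print(f"{word}\t{score:.3f}")

# Cosine similarity between a single word pair.
print(sg_model.wv.similarity("Friday", "Sunday"))
```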
Word similarity comparison of Word Embeddings ::: Word pair relationship Generally, closer words are considered more important to a word's meaning. Word embedding models have the ability to capture lexical relations between words, and identifying the relationships that connect words is important in NLP applications. We measure this semantic relationship by calculating the dot product of two vectors using Eq. DISPLAY_FORM48; a high cosine similarity score denotes closer words in the embedding matrix, while a low cosine similarity score means a larger distance between the word pair. We present the cosine similarity scores of different semantically or syntactically related word pairs taken from the vocabulary in Table TABREF77 along with their English translations, which shows average similarities of 0.632, 0.650, and 0.591 yielded by CBoW, SG and GloVe, respectively. The SG model achieved the highest average similarity score of 0.650, followed by CBoW with a 0.632 average similarity score, while GloVe also achieved a considerable average score of 0.591. However, the average similarity score of SdfastText is 0.388, and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText. This shows that, along with its performance, the vocabulary of SdfastText is also limited compared to our proposed word embeddings. Moreover, the average semantic relatedness scores between countries and their capitals are shown in Table TABREF78 with English translations, where SG again yields the best average score of 0.663, followed by CBoW with a 0.611 similarity score. GloVe also yields a good semantic relatedness score of 0.576, while SdfastText yields an average score of 0.391; the first query pair, China-Beijing, is not available in the vocabulary of SdfastText. The similarity score for Afghanistan-Kabul is lower in our proposed CBoW, SG, and GloVe models because the word Kabul is the name of the capital of Afghanistan but also frequently appears as an adjective in Sindhi text, where it means able. Word similarity comparison of Word Embeddings ::: Comparison with WordSim353 We evaluate the performance of our proposed word embeddings on the WordSim353 dataset by translating the English word pairs into Sindhi. Due to vocabulary differences between English and Sindhi, we were unable to find the authentic meaning of six terms, so we left these terms untranslated; our final Sindhi WordSim353 therefore consists of 347 word pairs. Table TABREF80 shows the Spearman correlation results, computed using Eq. DISPLAY_FORM51, for embeddings of different dimensionality on the translated WordSim353. Table TABREF80 presents complete results for the different ws values for CBoW, SG and GloVe, in which ws=7 yields better performance than ws of 3 and 5, respectively. The SG model outperforms CBoW and GloVe in semantic and syntactic similarity by achieving a performance of 0.629 with ws=7. In comparison, for English, BIBREF27 achieved average semantic and syntactic similarities of 0.637 and 0.656 with CBoW and SG, respectively. Therefore, despite the challenges in translation from English to Sindhi, our proposed Sindhi word embeddings have efficiently captured the semantic and syntactic relationships.
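A sketch of the WordSim353 evaluation is given below, assuming a tab-separated file of translated word pairs with their human scores (the file name is hypothetical); out-of-vocabulary pairs are skipped, in line with the untranslated terms discussed above.

```python
from scipy.stats import spearmanr

def evaluate_wordsim(model, pairs_path="wordsim353_sd.tsv"):
    """Spearman correlation between model cosine similarities and human
    judgements; `pairs_path` is a hypothetical TSV of word1, word2, score."""
    human, predicted = [], []
    with open(pairs_path, encoding="utf-8") as f:
        for line in f:
            w1, w2, score = line.rstrip("\n").split("\t")
            if w1 in model.wv and w2 in model.wv:   # skip out-of-vocabulary pairs
                human.append(float(score))
                predicted.append(model.wv.similarity(w1, w2))
    rho, _ = spearmanr(human, predicted)
    return rho
```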
Word similarity comparison of Word Embeddings ::: Visualization We use the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF36 dimensionality reduction algorithm with PCA BIBREF37 for exploratory analysis of the embeddings in a 2-dimensional map. t-SNE is a non-linear dimensionality reduction algorithm for the visualization of high-dimensional datasets: it computes the probability of similar word clusters in the high-dimensional space and the corresponding probability of similar points in the low-dimensional space. The purpose of using t-SNE for the visualization of word embeddings is to keep similar words close together in 2-dimensional $x,y$ coordinate pairs while maximizing the distance between dissimilar words. t-SNE has a tunable perplexity (PPL) parameter used to balance the data points at both the local and global levels. We visualize the embeddings using PPL=20 and 5000 iterations on the 300-D models. We use the same query words (see Table TABREF74) and retrieve their top 20 nearest neighboring word clusters for a better understanding of the distance between similar words. Every query word has a distinct color for the clear visualization of its group of similar words, and closer word clusters indicate a high similarity between the query and the retrieved word clusters. The word clusters in SG (see Fig. FIGREF83) are closest to their groups of semantically related words, and the CBoW model depicted in Fig. FIGREF82 and GloVe in Fig. FIGREF84 also show better cluster formation of words than SdfastText in Fig. FIGREF85, respectively.
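A sketch of this visualization step, assuming the scikit-learn and matplotlib libraries, is shown below; it projects a query word and its top-20 neighbours with PCA followed by t-SNE at perplexity 20 (the 5000 t-SNE iterations can be set via the iteration parameter of the installed scikit-learn version).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_neighbours(model, query, topn=20, perplexity=20):
    """Project a query word and its top-n neighbours to 2-D with PCA + t-SNE."""
    words = [query] + [w for w, _ in model.wv.most_similar(query, topn=topn)]
    vectors = np.array([model.wv[w] for w in words])
    # PCA first compresses the 300-D vectors, then t-SNE maps them to 2-D.
    reduced = PCA(n_components=min(50, len(words) - 1)).fit_transform(vectors)
    coords = TSNE(n_components=2, perplexity=perplexity,
                  init="random", random_state=0).fit_transform(reduced)
    plt.scatter(coords[:, 0], coords[:, 1])
    for (x, y), w in zip(coords, words):
        plt.annotate(w, (x, y))
    plt.show()
```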
Discussion and future work In this information age, the existence of LRs plays a vital role in the digital survival of natural languages, because NLP tools are used to process a flow of unstructured data from disparate sources. It is imperative to mention that Sindhi Persian-Arabic is presently frequently used in online communication, newspapers, and public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for its LRs is also increasing for the development of language technology tools. But little work has been carried out on the development of resources, which is not sufficient for designing language-independent or machine learning algorithms. The present work is a first comprehensive initiative on resource development along with its evaluation for statistical Sindhi language processing. More recently, NN based approaches have produced state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from large unlabelled corpora, and such word embeddings have also motivated work on low-resourced languages. Our work mainly consists of the novel contributions of resource development along with a comprehensive evaluation for the utilization of NN based approaches in SNLP applications. The large corpus obtained from multiple web resources is utilized for the training of word embeddings using the SG, CBoW and GloVe models. The intrinsic evaluation along with the comparative results demonstrates that the proposed Sindhi word embeddings have captured the semantic information more accurately than the recently released SdfastText word vectors. SG yields the best results in nearest neighbors, word pair relationship and semantic similarity, the performance of CBoW is close to SG in all the evaluation metrics, and GloVe also yields good word representations, although the SG and CBoW models surpass the GloVe model in all evaluation metrics. Hyperparameter optimization is as important as designing a new algorithm, and the choice of optimal parameters is a key aspect of the performance gain in learning robust word embeddings. Moreover, we observed that the size of the corpus and careful preprocessing have a large impact on the quality of word embeddings. From an algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, and overall the window size, learning rate, and number of epochs are the core parameters that largely influence the performance of word embedding models. Ultimately, the new corpus of the low-resourced Sindhi language, the list of stop words, and the pretrained word embeddings, along with the empirical evaluation, will be a good supplement for future research in SSLP applications. In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging and named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks, and the extrinsic evaluation approach will be employed for the performance analysis of the proposed word embeddings. Moreover, we will also utilize the corpus with the Bidirectional Encoder Representations from Transformers (BERT) BIBREF13 for learning deep contextualized Sindhi word representations. Furthermore, the generated word embeddings will be utilized for the automatic construction of a Sindhi WordNet. Conclusion In this paper, we present three main novel contributions. Firstly, we develop a large corpus containing a large vocabulary of more than 61 million tokens and 908,456 unique words. Secondly, a list of Sindhi stop words is constructed by identifying their high frequency and low importance with the help of a Sindhi linguistic expert. Thirdly, unsupervised Sindhi word embeddings are generated using the state-of-the-art CBoW, SG and GloVe algorithms and evaluated using the popular intrinsic evaluation approaches of the cosine similarity matrix and WordSim353, for the first time in Sindhi language processing. We translate the English WordSim353 using an English-Sindhi bilingual dictionary, which will also be a good resource for the evaluation of Sindhi word embeddings. Moreover, the proposed word embeddings are compared with the recently released SdfastText word representations. Our empirical results demonstrate that our proposed Sindhi word embeddings capture high semantic relatedness in nearest neighboring words, word pair relationships, country-capital pairs, and WordSim353. SG yields the best performance, followed by the CBoW and GloVe models. The performance of GloVe is lower on the same vocabulary because, unlike SG and CBoW, it does not benefit from character-level learning of word representations and sub-sampling. Our proposed Sindhi word embeddings surpass SdfastText on the intrinsic evaluation metrics; the vocabulary of SdfastText is also limited because it was trained on a small Wikipedia corpus of Sindhi Persian-Arabic. We will further investigate the extrinsic performance of the proposed word embeddings on a Sindhi text classification task in the future. The proposed resources, along with the systematic evaluation, will be a sophisticated addition to the computational resources for statistical Sindhi language processing.
908,456 unique words are available in the collected corpus.
Q: How is the data collected, which web resources were used? Text: Introduction Sindhi is a rich morphological, mutltiscript, and multidilectal language. It belongs to the Indo-Aryan language family BIBREF0, with significant cultural and historical background. Presently, it is recognized as is an official language BIBREF1 in Sindh province of Pakistan, also being taught as a compulsory subject in Schools and colleges. Sindhi is also recognized as one of the national languages in India. Ulhasnagar, Rajasthan, Gujarat, and Maharashtra are the largest Indian regions of Sindhi native speakers. It is also spoken in other countries except for Pakistan and India, where native Sindhi speakers have migrated, such as America, Canada, Hong Kong, British, Singapore, Tanzania, Philippines, Kenya, Uganda, and South, and East Africa. Sindhi has rich morphological structure BIBREF2 due to a large number of homogeneous words. Historically, it was written in multiple writing systems, which differ from each other in terms of orthography and morphology. The Persian-Arabic is the standard script of Sindhi, which was officially accepted in 1852 by the British government. However, the Sindhi-Devanagari is also a popular writing system in India being written in left to right direction like the Hindi language. Formerly, Khudabadi, Gujrati, Landa, Khojki, and Gurumukhi were also adopted as its writing systems. Even though, Sindhi has great historical and literal background, presently spoken by nearly 75 million people BIBREF1. The research on SNLP was coined in 2002, however, IT grabbed research attention after the development of its Unicode system BIBREF3. But still, Sindhi stands among the low-resourced languages due to the scarcity of core language processing resources of the raw and annotated corpus, which can be utilized for training robust word embeddings or the use of machine learning algorithms. Since the development of annotated datasets requires time and human resources. The Language Resources (LRs) are fundamental elements for the development of high quality NLP systems based on automatic or NN based approaches. The LRs include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. The development of such resources has received great research interest for the digitization of human languages BIBREF4. Many world languages are rich in such language processing resources integrated in their software tools including English BIBREF5 BIBREF6, Chinese BIBREF7 and other languages BIBREF8 BIBREF9. The Sindhi language lacks the basic computational resources BIBREF10 of a large text corpus, which can be utilized for training robust word embeddings and developing language independent NLP applications including semantic analysis, sentiment analysis, parts of the speech tagging, named entity recognition, machine translation BIBREF11, multitasking BIBREF12, BIBREF13. Presently Sindhi Persian-Arabic is frequently used for online communication, newspapers, public institutions in Pakistan, and India BIBREF1. But little work has been carried out for the development of LRs such as raw corpus BIBREF14, BIBREF15, annotated corpus BIBREF16, BIBREF17, BIBREF1, BIBREF18. In the best of our knowledge, Sindhi lacks the large unlabelled corpus which can be utilized for generating and evaluating word embeddings for Statistical Sindhi Language Processing (SSLP). 
One way to to break out this loop is to learn word embeddings from unlabelled corpora, which can be utilized to bootstrap other downstream NLP tasks. The word embedding is a new term of semantic vector space BIBREF19, distributed representations BIBREF20, and distributed semantic models. It is a language modeling approach BIBREF21 used for the mapping of words and phrases into $n$-dimensional dense vectors of real numbers that effectively capture the semantic and syntactic relationship with neighboring words in a geometric way BIBREF22 BIBREF23. Such as “Einstein” and “Scientist” would have greater similarity compared with “Einstein” and “doctor.” In this way, word embeddings accomplish the important linguistic concept of “a word is characterized by the company it keeps". More recently NN based models yield state-of-the-art performance in multiple NLP tasks BIBREF24 BIBREF25 with the word embeddings. One of the advantages of such techniques is they use unsupervised approaches for learning representations and do not require annotated corpus which is rare for low-resourced Sindhi language. Such representions can be trained on large unannotated corpora, and then generated representations can be used in the NLP tasks which uses a small amount of labelled data. In this paper, we address the problems of corpus construction by collecting a large corpus of more than 61 million words from multiple web resources using the web-scrappy framework. After the collection of the corpus, we carefully preprocessed for the filtration of noisy text, e.g., the HTML tags and vocabulary of the English language. The statistical analysis is also presented for the letter, word frequencies and identification of stop-words. Finally, the corpus is utilized to generate Sindhi word embeddings using state-of-the-art GloVe BIBREF26 SG and CBoW BIBREF27 BIBREF20 BIBREF24 algorithms. The popular intrinsic evaluation method BIBREF20 BIBREF28 BIBREF29 of calculating cosine similarity between word vectors and WordSim353 BIBREF30 are employed to measure the performance of the learned Sindhi word embeddings. We translated English WordSim353 word pairs into Sindhi using bilingual English to Sindhi dictionary. The intrinsic approach typically involves a pre-selected set of query terms BIBREF23 and semantically related target words, which we refer to as query words. Furthermore, we also compare the proposed word embeddings with recently revealed Sindhi fastText (SdfastText) BIBREF25 word representations. To the best of our knowledge, this is the first comprehensive work on the development of large corpus and generating word embeddings along with systematic evaluation for low-resourced Sindhi Persian-Arabic. The synopsis of our novel contributions is listed as follows: We present a large corpus of more than 61 million words obtained from multiple web resources and reveal a list of Sindhi stop words. We develop a text cleaning pipeline for the preprocessing of the raw corpus. Generate word embeddings using GloVe, CBoW, and SG Word2Vec algorithms also evaluate and compare them using the intrinsic evaluation approaches of cosine similarity matrix and WordSim353. We are the first to evaluate SdfastText word representations and compare them with our proposed Sindhi word embeddings. The remaining sections of the paper are organized as; Section SECREF2 presents the literature survey regarding computational resources, Sindhi corpus construction, and word embedding models. 
Afterwards, Section SECREF3 presents the employed methodology, Section SECREF4 consist of statistical analysis of the developed corpus. Section SECREF5 present the experimental setup. The intrinsic evaluation results along with comparison are given in Section SECREF6. The discussion and future work are given in Section SECREF7, and lastly, Section SECREF8 presents the conclusion. Related work The natural language resources refer to a set of language data and descriptions BIBREF31 in machine readable form, used for building, improving, and evaluating NLP algorithms or softwares. Such resources include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. Many world languages are rich in such language processing resources integrated in the software tools including NLTK for English BIBREF5, Stanford CoreNLP BIBREF6, LTP for Chinese BIBREF7, TectoMT for German, Russian, Arabic BIBREF8 and multilingual toolkit BIBREF9. But Sindhi language is at an early stage for the development of such resources and software tools. The corpus construction for NLP mainly involves important steps of acquisition, preprocessing, and tokenization. Initially, BIBREF14 discussed the morphological structure and challenges concerned with the corpus development along with orthographical and morphological features in the Persian-Arabic script. The raw and annotated corpus BIBREF1 for Sindhi Persian-Arabic is a good supplement towards the development of resources, including raw and annotated datasets for parts of speech tagging, morphological analysis, transliteration between Sindhi Persian-Arabic and Sindhi-Devanagari, and machine translation system. But the corpus is acquired only form Wikipedia-dumps. A survey-based study BIBREF4 provides all the progress made in the Sindhi Natural Language Processing (SNLP) with the complete gist of adopted techniques, developed tools and available resources which show that work on resource development on Sindhi needs more sophisticated efforts. The raw corpus is utilized for word segmentation BIBREF32 of Sindhi Persian-Arabic. More recently, an initiative towards the development of resources is taken BIBREF16 by open sourcing annotated dataset of Sindhi Persian-Arabic obtained from news and social blogs. The existing and proposed work is presented in Table TABREF9 on the corpus development, word segmentation, and word embeddings, respectively. The power of word embeddings in NLP was empirically estimated by proposing a neural language model BIBREF21 and multitask learning BIBREF12, but recently usage of word embeddings in deep neural algorithms has become integral element BIBREF33 for performance acceleration in deep NLP applications. The CBoW and SG BIBREF27 BIBREF20 popular word2vec neural architectures yielded high quality vector representations in lower computational cost with integration of character-level learning on large corpora in terms of semantic and syntactic word similarity later extended BIBREF33 BIBREF24. Both approaches produce state-of-the-art accuracy with fast training performance, better representations of less frequent words and efficient representation of phrases as well. BIBREF34 proposed NN based approach for generating morphemic-level word embeddings, which surpassed all the existing embedding models in intrinsic evaluation. A count-based GloVe model BIBREF26 also yielded state-of-the-art results in an intrinsic evaluation and downstream NLP tasks. 
The performance of Word embeddings is evaluated using intrinsic BIBREF23 BIBREF29 and extrinsic evaluation BIBREF28 methods. The performance of word embeddings can be measured with intrinsic and extrinsic evaluation approaches. The intrinsic approach is used to measure the internal quality of word embeddings such as querying nearest neighboring words and calculating the semantic or syntactic similarity between similar word pairs. A method of direct comparison for intrinsic evaluation of word embeddings measures the neighborhood of a query word in vector space. The key advantage of that method is to reduce bias and create insight to find data-driven relevance judgment. An extrinsic evaluation approach is used to evaluate the performance in downstream NLP tasks, such as parts-of-speech tagging or named-entity recognition BIBREF23, but the Sindhi language lacks annotated corpus for such type of evaluation. Moreover, extrinsic evaluation is time consuming and difficult to interpret. Therefore, we opt intrinsic evaluation method BIBREF28 to get a quick insight into the quality of proposed Sindhi word embeddings by measuring the cosine distance between similar words and using WordSim353 dataset. A study reveals that the choice of optimized hyper-parameters BIBREF35 has a great impact on the quality of pretrained word embeddings as compare to desing a novel algorithm. Therefore, we optimized the hyperparameters for generating robust Sindhi word embeddings using CBoW, SG and GloVe models. The embedding visualization is also useful to visualize the similarity of word clusters. Therefore, we use t-SNE BIBREF36 dimensionality reduction algorithm for compressing high dimensional embedding into 2-dimensional $x$,$y$ coordinate pairs with PCA BIBREF37. The PCA is useful to combine input features by dropping the least important features while retaining the most valuable features. Methodology This section presents the employed methodology in detail for corpus acquisition, preprocessing, statistical analysis, and generating Sindhi word embeddings. Methodology ::: Task description We initiate this work from scratch by collecting large corpus from multiple web resources. After preprocessing and statistical analysis of the corpus, we generate Sindhi word embeddings with state-of-the-art CBoW, SG, and GloVe algorithms. The generated word embeddings are evaluated using the intrinsic evaluation approaches of cosine similarity between nearest neighbors, word pairs, and WordSim-353 for distributional semantic similarity. Moreover, we use t-SNE with PCA for the comparison of the distance between similar words via visualization. Methodology ::: Corpus acquisition The corpus is a collection of human language text BIBREF31 built with a specific purpose. However, the statistical analysis of the corpus provides quantitative, reusable data, and an opportunity to examine intuitions and ideas about language. Therefore, the corpus has great importance for the study of written language to examine the text. 
In fact, realizing the necessity of large text corpus for Sindhi, we started this research by collecting raw corpus from multiple web resource using web-scrappy framwork for extraction of news columns of daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary websites, novels, history and religious books from Sindhi Adabi Board and tweets regarding news and sports are collected from twitter. Methodology ::: Preprocessing The preprocessing of text corpus obtained from multiple web resources is a challenging task specially it becomes more complicated when working on low-resourced language like Sindhi due to the lack of open-source preprocessing tools such as NLTK BIBREF5 for English. Therefore, we design a preprocessing pipeline depicted in Figure FIGREF22 for the filtration of unwanted data and vocabulary of other languages such as English to prepare input for word embeddings. Whereas, the involved preprocessing steps are described in detail below the Figure FIGREF22. Moreover, we reveal the list of Sindhi stop words BIBREF38 which is labor intensive and requires human judgment as well. Hence, the most frequent and least important words are classified as stop words with the help of a Sindhi linguistic expert. The partial list of Sindhi stop words is given in TABREF61. We use python programming language for designing the preprocessing pipeline using regex and string functions. Input: The collected text documents were concatenated for the input in UTF-8 format. Replacement symbols: The punctuation marks of a full stop, hyphen, apostrophe, comma, quotation, and exclamation marks replaced with white space for authentic tokenization because without replacing these symbols with white space the words were found joined with their next or previous corresponding words. Filtration of noisy data: The text acquisition from web resources contain a huge amount of noisy data. Therefore, we filtered out unimportant data such as the rest of the punctuation marks, special characters, HTML tags, all types of numeric entities, email, and web addresses. Normalization: In this step, We tokenize the corpus then normalize to lower-case for the filtration of multiple white spaces, English vocabulary, and duplicate words. The stop words were only filtered out for preparing input for GloVe. However, the sub-sampling approach in CBoW and SG can discard most frequent or stop words automatically. Methodology ::: Word embedding models The NN based approaches have produced state-of-the-art performance in NLP with the usage of robust word embedings generated from the large unlabelled corpus. Therefore, word embeddings have become the main component for setting up new benchmarks in NLP using deep learning approaches. Most recently, the use cases of word embeddings are not only limited to boost statistical NLP applications but can also be used to develop language resources such as automatic construction of WordNet BIBREF39 using the unsupervised approach. The word embedding can be precisely defined as the encoding of vocabulary $V$ into $N$ and the word $w$ from $V$ to vector $\overrightarrow{w} $ into $N$-dimensional embedding space. They can be broadly categorized into predictive and count based methods, being generated by employing co-occurrence statistics, NN algorithms, and probabilistic models. 
The GloVe BIBREF26 algorithm treats each word as a single entity in the corpus and generates a vector of each word. However, CBoW and SG BIBREF27 BIBREF20, later extended BIBREF33 BIBREF24, well-known as word2vec rely on simple two layered NN architecture which uses linear activation function in hidden layer and softmax in the output layer. The work2vec model treats each word as a bag-of-character n-gram. Methodology ::: GloVe The GloVe is a log-bilinear regression model BIBREF26 which combines two methods of local context window and global matrix factorization for training word embeddings of a given vocabulary in an unsupervised way. It weights the contexts using the harmonic function, for example, a context word four tokens away from an occurrence will be counted as $\frac{1}{4}$. The Glove’s implementation represents word $w \in V_{w}$ and context $c \in V_{c}$ in $D$-dimensional vectors $\overrightarrow{w}$ and $\overrightarrow{c}$ in a following way, Where, $b^{\overrightarrow{w}}$ is row vector $\left|V_{w}\right|$ and $b^{\overrightarrow{c}}$ is $\left|V_{c}\right|$ is column vector. Methodology ::: Continuous bag-of-words The standard CBoW is the inverse of SG BIBREF27 model, which predicts input word on behalf of the context. The length of input in the CBoW model depends on the setting of context window size which determines the distance to the left and right of the target word. Hence the context is a window that contain neighboring words such as by giving $w=\left\lbrace w_{1}, w_{2}, \dots \dots w_{t}\right\rbrace $ a sequence of words $T$, the objective of the CBoW is to maximize the probability of given neighboring words such as, Where, $c_{t}$ is context of $t^{\text{th}}$ word for example with window $w_{t-c}, \ldots w_{t-1}, w_{t+1}, \ldots w_{t+c}$ of size $2 c$. Methodology ::: Skip gram The SG model predicts surrounding words by giving input word BIBREF20 with training objective of learning good word embeddings that efficiently predict the neighboring words. The goal of skip-gram is to maximize average log-probability of words $w=\left\lbrace w_{1}, w_{2}, \dots \dots w_{t}\right\rbrace $ across the entire training corpus, Where, $c_{t}$ denotes the context of words indices set of nearby $w_{t}$ words in the training corpus. Methodology ::: Hyperparameters ::: Sub-sampling Th sub-sampling BIBREF20 approach is useful to dilute most frequent or stop words, also accelerates learning rate, and increases accuracy for learning rare word vectors. Numerous words in English, e.g., ‘the’, ‘you’, ’that’ do not have more importance, but these words appear very frequently in the text. However, considering all the words equally would also lead to over-fitting problem of model parameters BIBREF24 on the frequent word embeddings and under-fitting on the rest. Therefore, it is useful to count the imbalance between rare and repeated words. The sub-sampling technique randomly removes most frequent words with some threshold $t$ and probability $p$ of words and frequency $f$ of words in the corpus. Where each word$w_{i}$ is discarded with computed probability in training phase, $f(w_i )$ is frequency of word $w_{i}$ and $t>0$ are parameters. Methodology ::: Hyperparameters ::: Dynamic context window The traditional word embedding models usually use a fixed size of a context window. For instance, if the window size ws=6, then the target word apart from 6 tokens will be treated similarity as the next word. 
The dynamic weighting scheme instead assigns more weight to closer words, as closer words are generally considered to be more important to the meaning of the target word. The CBoW, SG and GloVe models employ such a weighting scheme. The GloVe model weights the contexts using a harmonic function; for example, a context word four tokens away from an occurrence is counted as $\frac{1}{4}$. The CBoW and SG implementations instead weight the contexts according to the distance from the target word, e.g. ws=6 weights its context words by $\frac{6}{6} \frac{5}{6} \frac{4}{6} \frac{3}{6} \frac{2}{6} \frac{1}{6}$ from the closest to the farthest position. Methodology ::: Hyperparameters ::: Sub-word model The sub-word model BIBREF24 can learn the internal structure of words by sharing character representations across words. In this way, the vector of each word is made of the sum of its character n-gram vectors. For example, the vector of the word “table” is the sum of its n-gram vectors; setting the letter n-gram size from $minn=3$ to $maxn=6$ yields sub-words of “table” such as <ta, tab, tabl, table, table>, abl, able, able>, ble, ble>, le>, with minimum length $minn=3$ and maximum length $maxn=6$. The $<$ and $>$ symbols mark word boundaries, separating prefixes and suffixes from other character sequences. In this way, the sub-word model exploits principles of morphology, which improves the quality of infrequent word representations. In addition to its character n-grams, the input word $w$ itself is also included in the set of n-grams, to learn a representation for each word. Given a word $w$ and an input dictionary of n-grams of size $K$, with $K_{w} \subset \lbrace 1, \ldots , K\rbrace $ the set of n-grams appearing in $w$, a representation $Z_{k}$ is associated with each n-gram $Z$. Hence, each word is represented by the sum of its character n-gram representations, where $s$ is the scoring function in the following equation, Methodology ::: Hyperparameters ::: Position-dependent weights The position-dependent weighting approach BIBREF40 is used to avoid directly encoding representations for words and their positions, which can lead to over-fitting. The approach learns positional representations within contextual word representations and uses them to reweight the word embeddings. Thus, it captures good contextual representations at a lower computational cost, where $p$ is an individual position in the context window associated with a vector $d_{p}$. The context vector is then the average of the context words reweighted by their positional vectors, where $P$ is the set of relative positions in the context window and $v_{C}$ is the context vector of $w_{t}$, respectively. Methodology ::: Hyperparameters ::: Shifted point-wise mutual information Using a sparse Shifted Positive Point-wise Mutual Information (SPPMI) BIBREF41 word-context matrix for learning word representations improves results on two word similarity tasks. CBoW and SG have the hyperparameter $k$ (number of negatives) BIBREF27 BIBREF20, which affects the value that both models try to optimize for each $(w, c): P M I(w, c)-\log k$. The parameter $k$ has two functions: it gives a better estimation of negative examples, and it acts as a prior on the probability of observing a positive example (an actual occurrence of $(w,c)$). Methodology ::: Hyperparameters ::: Deleting rare words Before creating the context windows, the automatic deletion of rare words also leads to performance gains in the CBoW, SG and GloVe models, and it further increases the actual size of the context windows.
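The character n-gram decomposition used by the sub-word model above can be reproduced in a few lines of Python. This sketch covers only the n-gram extraction step, with the boundary symbols < and > and the example word "table" taken from the description; the summation of n-gram vectors is left out.

```python
def char_ngrams(word, minn=3, maxn=6):
    """Return the set of character n-grams of a word, plus the word itself,
    with '<' and '>' marking the word boundaries (sub-word model, BIBREF24)."""
    wrapped = f"<{word}>"
    grams = set()
    for n in range(minn, maxn + 1):
        for i in range(len(wrapped) - n + 1):
            grams.add(wrapped[i:i + n])
    grams.add(wrapped)          # the full word is also kept as a feature
    return grams

print(sorted(char_ngrams("table", minn=3, maxn=6)))
# e.g. '<ta', 'tab', 'abl', 'ble', 'le>', '<tab', 'able', 'ble>', 'tabl',
# 'table', '<table', 'table>', '<table>', ...
```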
Methodology ::: Evaluation methods The intrinsic evaluation is based on semantic similarity BIBREF23 in word embeddings. The word similarity measure approach BIBREF35 states that words are similar if they appear in similar contexts. We measure the word similarity of the proposed Sindhi word embeddings using the dot product method and WordSim353. Methodology ::: Evaluation methods ::: Cosine similarity The cosine similarity between two non-zero vectors is a popular measure that calculates the cosine of the angle between them, which can be derived using the Euclidean dot product. The dot product is the sum of the products of the corresponding components of both vectors; its result is not another vector but a single scalar value. For two vectors $\overrightarrow{a}=\left(a_{1}, a_{2}, a_{3}, \dots , a_{n}\right)$ and $\overrightarrow{b}=\left({b}_{1}, {b}_{2}, {b}_{3}, \ldots , {b}_{n}\right)$, where $a_{i}$ and $b_{i}$ are the components of the vectors and $n$ is their dimension, the dot product is defined as, The cosine of two non-zero vectors can then be derived using the Euclidean dot product formula, Given the two vectors of attributes $a$ and $b$, the cosine similarity $\cos ({\theta })$ is expressed using the dot product and the magnitudes as, where $a_{i}$ and $b_{i}$ are the components of vectors $\overrightarrow{a}$ and $\overrightarrow{b}$, respectively. Methodology ::: Evaluation methods ::: WordSim353 WordSim353 BIBREF42 is popular for the evaluation of lexical similarity and relatedness. Similarity scores were assigned by 13 to 16 human subjects with semantic relations BIBREF30 for 353 English noun pairs. Due to the lack of annotated datasets in the Sindhi language, we translated WordSim353 using an English-Sindhi bilingual dictionary for the evaluation of our proposed Sindhi word embeddings and SdfastText. We use the Spearman correlation coefficient for the semantic and syntactic similarity comparison; it is used to discover the strength of linear or nonlinear relationships when there are no repeated data values. A perfect Spearman correlation of $+1$ or $-1$ indicates that the two sets of data (word pairs) are monotonically increasing or decreasing functions of each other, and it is computed in the following way, where $r_s$ is the rank correlation coefficient, $n$ denotes the number of observations, and $d_i$ is the rank difference between the $i^{th}$ observations. Statistical analysis of corpus The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of the collected corpus (see Table TABREF52) with the number of sentences, words and unique tokens. Statistical analysis of corpus ::: Letter occurrences The frequency of letter occurrences in human language is not arbitrarily organized but follows specific rules, which enables us to describe linguistic regularities. Zipf's law BIBREF43 suggests that, if the frequencies of letter or word occurrences are ranked in descending order, then, where $F_{r}$ is the letter frequency of rank $r$, and $a$ and $b$ are parameters of the input text. The comparative letter frequency in the corpus is the total number of occurrences of a letter divided by the total number of letters present in the corpus. The letter frequencies in our developed corpus are depicted in Figure FIGREF55; the corpus contains 187,620,276 characters in total.
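A minimal sketch of the comparative letter frequency computation just described (occurrences of a letter divided by the total number of letters), with letters printed in descending rank order as in Zipf's law. The short Sindhi sentence is only a placeholder for the corpus.

```python
from collections import Counter

def letter_frequencies(text):
    """Comparative letter frequency: occurrences of each letter divided by
    the total number of letters in the corpus, in descending order."""
    letters = [ch for ch in text if ch.isalpha()]
    counts = Counter(letters)
    total = sum(counts.values())
    return {ch: n / total for ch, n in counts.most_common()}

corpus = "سنڌي ٻولي سنڌ جي ٻولي آهي"          # toy stand-in for the corpus
for rank, (letter, freq) in enumerate(letter_frequencies(corpus).items(), 1):
    print(rank, letter, round(freq, 3))
```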
The Sindhi Persian-Arabic alphabet consists of 52 letters, but 59 letters were detected in the vocabulary; the additional seven letters are modified uni-grams and standalone honorific symbols. Statistical analysis of corpus ::: Letter n-grams frequency We denote the combination of letter occurrences in a word as n-grams, where each letter is a gram in the word. The letter n-gram frequency is carefully analyzed in order to find the length of words, which is essential for developing NLP systems, including the learning of word embeddings, e.g. for choosing the minimum or maximum length of sub-words for character-level representation learning BIBREF24. We calculate the letter n-grams in words along with their percentage in the developed corpus (see Table TABREF57). The bi-gram words are the most frequent, mostly consisting of stop words, followed by 4-gram words. Statistical analysis of corpus ::: Word Frequencies The word frequency count is an observation of word occurrences in the text. Commonly used words have a higher frequency, such as the word “the" in English, while rarely used words have a lower frequency. Such frequencies can be calculated at the character or word level. We calculate word frequencies by counting the occurrences of a word $w$ in the corpus $c$, such that, where the frequency of $w$ is the sum of every occurrence $k$ of $w$ in $c$. Statistical analysis of corpus ::: Stop words The most frequent and least important words in NLP are often classified as stop words. The removal of such words can boost the performance of NLP models BIBREF38, e.g. for sentiment analysis and text classification. But the construction of such a word list is time consuming and requires human decisions. Firstly, we determined Sindhi stop words by counting their term frequencies using Eq. DISPLAY_FORM59, and secondly, by analysing their grammatical status with the help of a Sindhi linguistic expert, because not all frequent words are stop words (see Figure FIGREF62). After determining the importance of such words with the help of human judgment, we placed them in the list of stop words. The total number of detected stop words in our developed corpus is 340. A partial list of the most frequent Sindhi stop words is depicted in Table TABREF61 along with their frequencies. The filtration of stop words is an essential preprocessing step for learning GloVe BIBREF26 word embeddings; therefore, we filtered out stop words when preparing the input for the GloVe model. In CBoW and SG, however, the sub-sampling approach BIBREF33 BIBREF24 is used to discard such most frequent words. Experiments and results Hyperparameter optimization BIBREF23 is more important than designing a novel algorithm. We carefully optimized the dictionary- and algorithm-based parameters of the CBoW, SG and GloVe algorithms. Hence, we conducted a large number of training and evaluation experiments until we found the most suitable hyperparameters, depicted in Table TABREF64 and discussed in Section SECREF63. The choice of the optimized hyperparameters is based on the high cosine similarity scores in retrieving nearest neighboring words, the semantic and syntactic similarity between word pairs, WordSim353, and the visualization of the distance between the twenty nearest neighbors using t-SNE, respectively. All experiments were conducted on a GTX 1080-TITAN GPU.
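The word-frequency count and the frequency-based first pass for stop-word candidates described in the subsections above can be sketched as follows. The cut-off of 340 matches the size of the final list reported above, but the final decision still relies on the linguistic expert, and the toy token stream is a placeholder.

```python
from collections import Counter

def word_frequencies(tokens):
    """Frequency of a word w = the number of its occurrences in the corpus c."""
    return Counter(tokens)

def stopword_candidates(tokens, top_n=340):
    """First pass only: the top_n most frequent words are taken as stop-word
    candidates; the final list is decided with a linguistic expert, since
    not every frequent word is a stop word."""
    return [w for w, _ in word_frequencies(tokens).most_common(top_n)]

tokens = "جو ۾ آهي جو ۾ جو ڪتاب".split()       # toy token stream
print(word_frequencies(tokens).most_common(3))
print(stopword_candidates(tokens, top_n=2))
```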
Experiments and results ::: Hyperparameter optimization The state-of-the-art SG, CBoW BIBREF27 BIBREF33 BIBREF20 BIBREF24 and GloVe BIBREF26 word embedding algorithms are evaluated by parameter tuning for the development of Sindhi word embeddings. These parameters can be categorized into dictionary-based and algorithm-based parameters, respectively. The integration of character n-grams in learning word representations is an ideal method, especially for morphologically rich languages, because this approach can compute representations for rare and misspelled words. Sindhi is also a morphologically rich language; therefore, more robust embeddings became possible to train with the hyperparameter optimization of the SG, CBoW and GloVe algorithms. We tuned and evaluated the hyperparameters of the three algorithms individually, as discussed below. Number of epochs: Generally, more epochs on the corpus produce better results, but more epochs take a longer training time. We evaluated 10, 20, 30 and 40 epochs for each word embedding model, and 40 epochs consistently produced good results. Learning rate (lr): We tried lr values of $0.05$, $0.1$ and $0.25$; the optimal lr $(0.25)$ gave the best results for training all the embedding models. Dimensions ($D$): We evaluated and compared the quality of $100-D$, $200-D$ and $300-D$ embeddings using WordSim353 with different $ws$, and the optimal $300-D$ embeddings were further evaluated with the cosine similarity matrix for querying nearest neighboring words and calculating the similarity between word pairs. The embedding dimensionality has little effect on the quality of the intrinsic evaluation, although it might have more impact on the accuracy of certain downstream NLP applications; lower-dimensional embeddings are faster to train and evaluate. Character n-grams: The selection of the minimum (minn) and maximum (maxn) length of character n-grams is an important parameter for learning character-level word representations in the CBoW and SG models. Therefore, n-grams from $3-9$ were tested to analyse their impact on the accuracy of the embeddings. We set the length of character n-grams to $minn=2$ and $maxn=7$, keeping in view the word-length frequencies depicted in Table TABREF57. Window size (ws): A large ws means considering more context words, while a small ws limits the number of context words. Varying the size of the dynamic context window, we tried ws values of 3, 5 and 7; the optimal ws=7 yielded consistently better performance. Negative sampling (NS): More negative examples yield better results, but more negatives take a longer training time. We tried 10, 20 and 30 negative examples for CBoW and SG; 20 negative examples for CBoW and SG yielded significantly better performance within an acceptable average training time. Minimum word count (minw): We evaluated minimum word counts ranging from 1 to 8 and observed that the size of the input vocabulary decreases sharply when more words are ignored, and increases when rare words are kept. Ignoring words with a frequency of less than 4 in CBoW, SG and GloVe consistently yielded better results with a vocabulary of 200,000 words. Loss function (ls): We use hierarchical softmax (hs) for CBoW, negative sampling (ns) for SG and the default loss function for GloVe BIBREF26. The recommended verbosity level, number of buckets, sampling threshold and number of threads are used for training CBoW, SG BIBREF24 and GloVe BIBREF26.
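The description of CBoW and SG above (character n-grams, buckets, threads, hierarchical softmax and negative sampling) matches the fastText toolkit BIBREF24; assuming its Python bindings and an assumed corpus file name, the reported optimal hyperparameters translate into the following training calls. This is a sketch, not the authors' training script, and GloVe would be trained separately with its own implementation.

```python
import fasttext

# Skip-gram with the hyperparameters reported as optimal above:
# 40 epochs, lr=0.25, 300 dimensions, ws=7, minn=2, maxn=7,
# 20 negatives, minimum word count 4, negative-sampling loss.
sg_model = fasttext.train_unsupervised(
    "sindhi_corpus.txt",          # assumed name of the preprocessed corpus file
    model="skipgram",
    epoch=40, lr=0.25, dim=300, ws=7,
    minn=2, maxn=7, neg=20, minCount=4,
    loss="ns",
)

# CBoW with the same dictionary parameters but hierarchical-softmax loss.
cbow_model = fasttext.train_unsupervised(
    "sindhi_corpus.txt",
    model="cbow",
    epoch=40, lr=0.25, dim=300, ws=7,
    minn=2, maxn=7, minCount=4,
    loss="hs",
)

sg_model.save_model("sd_sg_300d.bin")
cbow_model.save_model("sd_cbow_300d.bin")
```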
Word similarity comparison of Word Embeddings ::: Nearest neighboring words The cosine similarity matrix BIBREF35 is a popular approach to compute, across all embedding dimensions, the relevance of each word to a query word. Words with similar contexts obtain high cosine similarity and geometrical relatedness in terms of Euclidean distance, which is a common and primary method to measure the distance between a set of words and their nearest neighbors. For each query word we report the top eight nearest neighboring words, determined by the highest cosine similarity scores using Eq. DISPLAY_FORM48. We present the English translation of both the query and the retrieved words and discuss their English meaning for ease of relevance judgment. To take a closer look at the semantic and syntactic relationships captured in the proposed word embeddings, Table TABREF74 shows the top eight nearest neighboring words of five different query words (Friday, Spring, Cricket, Red, Scientist) taken from the vocabulary. The first query word, Friday, returns the names of days (Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday) in an unordered sequence. SdfastText returns five names of days (Sunday, Thursday, Monday, Tuesday and Wednesday), and the GloVe model also returns five names of days. However, CBoW and SG give six names of days (all except Wednesday) along with different written forms of the query word Friday in Sindhi, which shows that CBoW and SG return more relevant words than SdfastText and GloVe. CBoW returned Add and GloVe returned Honorary, words which are only loosely related to the query word, but SdfastText returned two irrelevant words: Kameeso (N), which is a person's name (N) in Sindhi, and a Phrase, a combination of three Sindhi words that were not tokenized properly. Similarly, the nearest neighbors of the second query word, Spring, are retrieved accurately by CBoW, SG and GloVe as names and seasons that are semantically related to the query word, but SdfastText returned four irrelevant words out of eight: Dilbahar (N), a Phrase, Ashbahar (N) and Farzana (N). The third query word is Cricket, the name of a popular game. The first retrieved word for CBoW is Kabadi (N), a popular national game in Pakistan. Including Kabadi (N), all the words returned by CBoW, SG and GloVe are related to the game of cricket or are names of other games. But the first word returned by SdfastText, Gone.Cricket, contains a punctuation mark, i.e. two words joined by a full stop, which indicates a tokenization error in the preprocessing step; the sixth retrieved word is a misspelled combination of three words not related to the query word, and Played and Being played are also irrelevant or stop words. Moreover, the fourth query word, Red, gave results containing names closely related to the query word and different forms of the query word written in Sindhi. The last word returned by SdfastText is unknown, irrelevant and not found in the Sindhi dictionary for translation. The last query word, Scientist, also retrieves semantically related words with CBoW, SG and GloVe, but the first word given by SdfastText belongs to the Urdu language, which suggests that its vocabulary also contains words of other languages; another unknown word returned by SdfastText has no meaning in the Sindhi dictionary.
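The nearest-neighbor queries discussed above reduce to a cosine-similarity ranking over the embedding matrix. Below is a small numpy sketch of that ranking (a gensim KeyedVectors most_similar call would serve the same purpose); the vocabulary and the random vectors are placeholders, so the printed neighbors are meaningless and only illustrate the mechanics.

```python
import numpy as np

def top_k_neighbours(query, vocab, matrix, k=8):
    """Return the k words whose vectors have the highest cosine
    similarity with the query word's vector."""
    idx = vocab.index(query)
    norms = np.linalg.norm(matrix, axis=1)
    sims = matrix @ matrix[idx] / (norms * norms[idx])
    order = np.argsort(-sims)
    return [(vocab[i], float(sims[i])) for i in order if i != idx][:k]

# Assumed toy vocabulary: Friday, Saturday, Sunday, Monday in Sindhi,
# paired with random 300-dimensional vectors.
vocab = ["جمعو", "ڇنڇر", "آچر", "سومر"]
matrix = np.random.rand(len(vocab), 300)
print(top_k_neighbours("جمعو", vocab, matrix, k=3))
```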
Further interesting observations in the presented results are the diacritized words retrieved from our proposed word embeddings and the reliable tokenization achieved in the preprocessing step presented in Figure FIGREF22. SdfastText, in contrast, returned tri-gram phrase words for the query words Friday and Spring and misspelled words for the Cricket and Scientist query words. Hence, the overall performance of our proposed SG, CBoW and GloVe embeddings demonstrates high semantic relatedness in retrieving the top eight nearest neighbor words. Word similarity comparison of Word Embeddings ::: Word pair relationship Generally, closer words are considered more important to a word's meaning. Word embedding models have the ability to capture the lexical relations between words, and identifying the relationships that connect words is important in NLP applications. We measure such semantic relationships by calculating the dot product of two vectors using Eq. DISPLAY_FORM48. A high cosine similarity score denotes closer words in the embedding matrix, while a low cosine similarity score means a larger distance between the word pair. We present the cosine similarity scores of different semantically or syntactically related word pairs taken from the vocabulary in Table TABREF77 along with their English translation, which shows average similarities of 0.632, 0.650 and 0.591 yielded by CBoW, SG and GloVe, respectively. The SG model achieved the highest average similarity score of 0.650, followed by CBoW with 0.632; GloVe also achieved a considerable average score of 0.591. However, the average similarity score of SdfastText is 0.388, and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText. This shows that, along with its lower performance, the vocabulary of SdfastText is also limited compared to our proposed word embeddings. Moreover, the average semantic relatedness score between countries and their capitals is shown in Table TABREF78 with English translation, where SG again yields the best average score of 0.663, followed by CBoW with 0.611. GloVe also yields good semantic relatedness of 0.576, while SdfastText yields an average score of 0.391. The first word pair, China-Beijing, is not available in the vocabulary of SdfastText. The similarity score for Afghanistan-Kabul is lower in our proposed CBoW, SG and GloVe models because the word Kabul is the name of the capital of Afghanistan but also frequently appears as an adjective in Sindhi text, where it means able. Word similarity comparison of Word Embeddings ::: Comparison with WordSim353 We evaluate the performance of our proposed word embeddings using the WordSim353 dataset by translating the English word pairs to Sindhi. Due to vocabulary differences between English and Sindhi, we were unable to find an authentic meaning for six terms, so we left them untranslated; our final Sindhi WordSim353 therefore consists of 347 word pairs. Table TABREF80 shows the Spearman correlation results, computed with Eq. DISPLAY_FORM51, for embeddings of different dimensionality on the translated WordSim353. The table presents complete results with different ws values for CBoW, SG and GloVe, in which ws=7 yields better performance than ws of 3 and 5, respectively. The SG model outperforms CBoW and GloVe in semantic and syntactic similarity, achieving a score of 0.629 with ws=7.
In comparison, for English, BIBREF27 achieved average semantic and syntactic similarities of 0.637 and 0.656 with CBoW and SG, respectively. Therefore, despite the challenges of translating from English to Sindhi, our proposed Sindhi word embeddings have efficiently captured the semantic and syntactic relationships. Word similarity comparison of Word Embeddings ::: Visualization We use the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF36 dimensionality reduction algorithm with PCA BIBREF37 for exploratory analysis of the embeddings in a 2-dimensional map. t-SNE is a non-linear dimensionality reduction algorithm for the visualization of high-dimensional datasets. It first computes the probability of similar word clusters in the high-dimensional space and then estimates the probability of similar points in the corresponding low-dimensional space. The purpose of using t-SNE for the visualization of word embeddings is to keep similar words close together in the 2-dimensional $x,y$ coordinate plane while maximizing the distance between dissimilar words. t-SNE has a tunable perplexity (PPL) parameter used to balance the data points at both the local and global levels. We visualize the embeddings using PPL=20 and 5000 iterations on the 300-D models. We use the same query words (see Table TABREF74) and retrieve the top 20 nearest neighboring word clusters for a better understanding of the distance between similar words. Every query word has a distinct color for a clear visualization of its group of similar words. Closer word clusters show high similarity between the query and the retrieved word clusters. The word clusters in SG (see Fig. FIGREF83) are closest to their groups of semantically related words. The CBoW model depicted in Fig. FIGREF82 and GloVe in Fig. FIGREF84 also show better cluster formation than SdfastText in Fig. FIGREF85, respectively. Discussion and future work In the information age, the existence of LRs plays a vital role in the digital survival of natural languages, because NLP tools are used to process a flow of unstructured data from disparate sources. It is worth mentioning that Sindhi Persian-Arabic is presently frequently used in online communication, newspapers and public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for its LRs is also increasing for the development of language technology tools. However, little work has been carried out on resource development, which is not sufficient to design language-independent tools or machine learning algorithms. The present work is a first comprehensive initiative on resource development, along with its evaluation, for statistical Sindhi language processing. More recently, NN-based approaches have produced state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from large unlabelled corpora. Such word embeddings have also motivated work on low-resourced languages. Our work mainly consists of the novel contributions of resource development and comprehensive evaluation for the utilization of NN-based approaches in SNLP applications. The large corpus obtained from multiple web resources is utilized to train word embeddings with the SG, CBoW and GloVe models. The intrinsic evaluation, along with comparative results, demonstrates that the proposed Sindhi word embeddings have captured the semantic information more accurately than the recently released SdfastText word vectors.
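The visualization setup of the Visualization subsection above (PCA initialisation, perplexity 20, 5000 iterations, 300-D vectors, top-20 neighbors per query word, one color per query) can be sketched with scikit-learn and matplotlib as follows; the embedding matrix and labels are stand-ins rather than the trained Sindhi vectors.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Stand-in for the 300-D vectors of the top-20 neighbors of five query words.
vectors = np.random.rand(100, 300)
labels = [f"word_{i}" for i in range(100)]
groups = np.repeat(np.arange(5), 20)     # one color per query word

# n_iter is the iteration count (named max_iter in newer scikit-learn versions).
coords = TSNE(n_components=2, perplexity=20, n_iter=5000,
              init="pca", random_state=0).fit_transform(vectors)

plt.scatter(coords[:, 0], coords[:, 1], c=groups, cmap="tab10", s=12)
for (x, y), label in zip(coords, labels):
    plt.annotate(label, (x, y), fontsize=6)
plt.title("t-SNE projection of nearest-neighbor clusters")
plt.savefig("tsne_clusters.png", dpi=200)
```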
SG yields the best results in nearest neighbors, word pair relationship and semantic similarity. The performance of CBoW is also close to SG in all the evaluation metrics. GloVe also yields good word representations; however, the SG and CBoW models surpass the GloVe model in all evaluation metrics. Hyperparameter optimization is as important as designing a new algorithm. The choice of optimal parameters is a key aspect of performance gain in learning robust word embeddings. Moreover, we observed that the size of the corpus and careful preprocessing steps have a large impact on the quality of word embeddings. From an algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, and overall the window size, learning rate and number of epochs are the core parameters that largely influence the performance of word embedding models. Ultimately, the new corpus for the low-resourced Sindhi language, the list of stop words and the pretrained word embeddings, along with their empirical evaluation, will be a good supplement for future research in SSLP applications. In the future, we aim to use the corpus for annotation projects such as part-of-speech tagging and named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks, and an extrinsic evaluation approach will be employed for the performance analysis of the proposed word embeddings. Moreover, we will also utilize the corpus with Bidirectional Encoder Representations from Transformers (BERT) BIBREF13 for learning deep contextualized Sindhi word representations. Furthermore, the generated word embeddings will be utilized for the automatic construction of a Sindhi WordNet. Conclusion In this paper we present three novel contributions. Firstly, we develop a large corpus containing a large vocabulary of more than 61 million tokens and 908,456 unique words. Secondly, a list of Sindhi stop words is constructed by finding their high frequency and low importance with the help of a Sindhi linguistic expert. Thirdly, unsupervised Sindhi word embeddings are generated using the state-of-the-art CBoW, SG and GloVe algorithms and evaluated using the popular intrinsic evaluation approaches of the cosine similarity matrix and WordSim353, for the first time in Sindhi language processing. We translated the English WordSim353 using an English-Sindhi bilingual dictionary, which will also be a good resource for the evaluation of Sindhi word embeddings. Moreover, the proposed word embeddings are compared with the recently released SdfastText word representations. Our empirical results demonstrate that our proposed Sindhi word embeddings have captured high semantic relatedness in nearest neighboring words, word pair relationships, country-capital pairs and WordSim353. SG yields the best performance, followed by CBoW and GloVe. The performance of GloVe is lower on the same vocabulary because it lacks the character-level representation learning and sub-sampling approaches of SG and CBoW. Our proposed Sindhi word embeddings surpass SdfastText in the intrinsic evaluation metrics. Also, the vocabulary of SdfastText is limited because it was trained on a small Wikipedia corpus of Sindhi Persian-Arabic. We will further investigate the extrinsic performance of the proposed word embeddings on a Sindhi text classification task in the future.
The proposed resources, along with their systematic evaluation, will be a valuable addition to the computational resources for statistical Sindhi language processing.
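As a closing illustration of the WordSim353-style evaluation used in the paper above, the sketch below computes the Spearman correlation between human similarity judgments and model cosine scores with scipy; the word pairs and all the numbers are invented for the example.

```python
from scipy.stats import spearmanr

# Invented slice of a translated WordSim353: (word1, word2, human score 0-10).
pairs = [("tiger", "cat", 7.35),
         ("book", "paper", 7.46),
         ("king", "cabbage", 0.23)]
human_scores = [score for _, _, score in pairs]
model_scores = [0.61, 0.58, 0.05]   # cosine similarities from a trained model

rho, p_value = spearmanr(human_scores, model_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```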
daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary website, novels, history and religious books from Sindhi Adabi Board, tweets regarding news and sports are collected from twitter
75043c17a2cddfce6578c3c0e18d4b7cf2f18933
75043c17a2cddfce6578c3c0e18d4b7cf2f18933_0
Q: What trends are found in musical preferences? Text: Motivation, Background and Related Work Until recent times, research in popular music was mostly bound to a non-computational approach BIBREF0, but the availability of new data, models and algorithms helped the rise of new research trends. Computational analysis of music structure BIBREF1 focuses on parsing and annotating patterns in music files; computational music generation BIBREF2 trains systems able to generate songs with specific music styles; computational sociology of music analyzes databases annotated with metadata such as tempo, key, BPM and similar (generally referred to as sonic features); even the psychology of music uses data to find new models. Recent papers in computational sociology investigated novelty in popular music, finding that artists who are highly culturally and geographically connected are more likely to create novel songs, especially when they span multiple genres, are women, or are in the early stages of their careers BIBREF3. Using the position in Billboard charts and the sonic features of more than 20K songs, it has been demonstrated that songs exhibiting some degree of optimal differentiation in novelty are more likely to rise to the top of the charts BIBREF4. These findings offer very interesting perspectives on how popular culture impacts the competition of novel genres in cultural markets. Another problem addressed in this research field is the distinction between what is popular and what is significant to a musical context BIBREF5. Using a user-generated set of tags collected through an online music platform, it has been possible to compute a set of metrics, such as novelty, burst or duration, from a co-occurrence tag network relative to music albums, in order to find the tags that propagate most and the albums having a significant impact. Combining sonic features and topic extraction techniques on approximately 17K tracks, scholars demonstrated quantitative trends in harmonic and timbral properties that brought changes in music sound around 1964, 1983 and 1991 BIBREF6. Besides these research fields, there is a trend in the psychology of music that studies how musical preferences are reflected in the dimensions of personality BIBREF7. From this kind of research emerged the MUSIC model BIBREF8, which found that genre preferences can be decomposed into five factors: Mellow (relaxed, slow, and romantic), Unpretentious (easy, soft, well-known), Sophisticated (complex, intelligent or avant-garde), Intense (loud, aggressive, and tense) and Contemporary (catchy, rhythmic or danceable). Is it possible to find trends in the characteristics of the genres? And is it possible to predict the characteristics of future genres? To answer these questions, we produced a hand-crafted dataset with the intent of putting together MUSIC, style and sonic features, annotated by music genre and indexed by time and decade. To do so, we collected a list of popular music genres by decade from Wikipedia and instructed annotators to score them. The paper is structured as follows: in section SECREF2 we provide a brief history of popular music, in section SECREF3 we describe the dataset and in section SECREF4 we provide the results of the experiments. In the end we draw some conclusions. Brief introduction to popular music We define "popular music" as music which finds appeal outside culturally closed music groups, also thanks to its commercial nature.
Non-popular music can be divided into three broad groups: classical music (produced and performed by experts with a specific education), folk/world music (produced and performed by traditional cultures), and utility music (such as hymns and military marches, not primarily intended for commercial purposes). Popular music is a great means for spreading culture, and a perfect ground where cultural practices and industry processes combine. In particular, the cultural processes select novelties, broadly represented by underground music genres, and the industry tries to monetize them, making them commercially successful. In the following description we include almost all the genres that reached commercial success and a few of the underground genres that are related to them. Arguably the beginning of popular music is in the USA between the 1880s and 1890s with spirituals, work and shout chants BIBREF9, which we classify halfway between world music and popular music. The first real popular music genres in the 1900s were ragtime, a pioneer of piano blues and jazz, and gospel, derived from the religious chants of Afro-American communities and a pioneer of soul and RnB. The 1910s saw the birth of tin pan alley (simple pop songs for piano composed by professionals) and dixieland jazz, a spontaneous melting pot of ragtime, classical, Afro-American and Haitian music BIBREF10. In the 1920s, blues and hillbilly country became popular. The former was born as a form of expression of black communities and outcasts, while the latter was a form of entertainment of the white rural communities. Tin pan alley piano composers soon commercialized tracks in the style of blues, generating as a reaction boogie-woogie, an underground and very aggressive piano blues played by black musicians. In Chicago and New York jazz became more sophisticated and spread to Europe, where gypsy jazz became popular soon after. Both in the US and in Europe, the 1930s were dominated by swing, the most popular form of jazz, which was at the same time danceable, melancholic, catchy and intelligent. In the US, west swing, a mellow and easy type of country music, became popular thanks to western movies. The 1940s in the US saw a revival of dixieland jazz, the rise of be-bop (one of the most mellow and intelligent forms of jazz), the advent of crooners (male pop singers) and the establishment of back-to-the-roots types of country music such as bluegrass, a reaction against west swing, modernity and electric guitars. In the underground there was honky-tonk, a sad kind of country music that would influence folk rock. In the 1950s rock and roll was created by black communities with the electric fusion of blues, boogie-woogie and hillbilly, and was soon commercialized for large white audiences. Besides this, many things happened: urban blues forged its modern sound using electric guitars and harmonicas; cool jazz, played also by white people, launched a more commercial and clean style; gospel influenced both doo-wop (a-cappella music performed by groups of black singers imitating crooners) and RnB, where black female singers performed with a jazz or blues band.
The 1960s saw an explosion of genres: countrypolitan, an electric and easy form of country music, became the most commercialized genre in the US; the first independent labels (in particular Motown) turned doo-wop into well-arranged and hyper-produced soul music with good commercial success BIBREF11; ska, a form of dance music with a very typical offbeat, became popular outside of Jamaica; garage (and also surf) rock arose as the first forms of independent commercial rock music, sometimes aggressive and sometimes easy; in the UK, beat popularized a new style of hyper-produced rock music that had very big commercial success; blues rock emerged as the mix of the two genres; teenypop was created in order to sell records to younger audiences; independent movements like the beat generation and the hippies helped the rise of folk rock and psychedelic rock respectively BIBREF12; funk emerged from soul and jazz (while jazz turned into the extremely complex free jazz as a reaction against the commercial cool jazz, but remained underground). In the 1970s progressive rock turned psychedelia into a more complex form, and independent radios contributed to its diffusion as well as to the popularity of songwriters, an evolution of folk singers that proliferated from Latin America (nueva canción) to western Europe. Meanwhile, TV became a new channel for music marketing, exploited by glam rock, which emerged as a form of pop rock music with a fake transgressive image and eclectic arrangements; fusion jazz began to include funk and psychedelic elements; the disillusion due to the end of the hippie movement left angry and frustrated masses listening to hard rock and blues rock, which included anti-religious symbols and merged into heavy metal. Then garage and independent rock, fueled by anger and frustration, was commercialized as punk rock at the end of the decade, while disco music (a catchy and hyper-danceable version of soul and RnB) was played in famous clubs and linked to sex and fun, gathering the LGBT communities. The poorest black communities, kept out of the disco clubs, began to perform at house parties, giving rise to old skool rap, whose sampled sounds and rhythmic vocals were a great novelty but remained underground. The real novelties popularized in this decade were ambient (a very intelligent commercial downtempo music derived from classical music), reggae (which mixed ska, rock and folk and from Jamaica conquered the UK) and above all synth electronica, a type of industrial experimental music that became popular for its new sound and style, bridging the gap between rock and electronic music. This would deeply change the sound of the following decades BIBREF13. The 1980s began with the rise of synth pop and new wave. The former, also referred to as "new romantics", was a popular music that mixed catchy rhythms with simple melodies and synthetic sounds, while the latter was a hipster mix of glam rock and post-punk with a positive outlook (as opposed to the depressive mood of the real post-punk), with minor influences from synth electronica and reggae. The music industry also created glam metal for the heavy metal audiences, who reacted with extreme forms like thrash metal; a similar story happened with punk audiences, who soon moved to extreme forms like hardcore, which remained underground but highlighted serious tensions between the industry and the audiences that wanted spontaneous genres BIBREF14.
Meanwhile, discopop produced a very catchy, easy and danceable mix of disco, funk and synthetic sounds that greatly improved the quality of records, yielding one of the best-selling genres in the whole history of popular music. In a similar way smooth jazz (a mix of mellow and easy melodies with synthetic rhythmical bases) and soft adult (a mellow and easy form of pop) obtained good commercial success. Techno music emerged as a new danceable, synthetic and funky genre, and hard rap became popular with both black and white audiences, while electro (break dance at the time) and (pioneering) house music remained underground because of their overly innovative sampled sounds. In the 1990s alternative/grunge rock solved the tension between commercial and spontaneous genres with a style of rock that was at the same time aggressive, intelligent and easy to listen to. The same happened with skatepunk (a fast, happy and commercial form of rock) and rap metal (a mix of the two genres), while britpop continued the tradition of pop rock initiated with beat. RnB evolved into new jack swing (a softer, rhythmical and easy form of funk) and techno split into the commercial eurodance (a mix of techno and disco music with synthetic sounds, manipulated RnB vocals and strong beats) and the subculture of rave (an extremely aggressive form of techno played at secret parties and later in clubs), which helped the creation of goa trance, which new hippie communities used to accompany drug trips BIBREF15. An intelligent and slow mix of electro and RnB became popular as trip hop, while an aggressive and extremely fast form of electro with reggae influences became popular as jungle/DnB. By the end of the decade the most commercially successful genres were dancepop (a form of pop that included elements of funk, disco and eurodance in a sexy image) and gangsta rap/hip hop, which reached its stereotypical form and became mainstream, while independent labels (which produced many subgenres, from shoegaze/indie rock to electro and house) remained in the underground. In the underground, but in Latin America, there was also reggaetón, a Latin form of rap. The rise of free downloads and later of social network websites in the 2000s opened new channels for independent genres, allowing the rise of grime (a type of electro mixing DnB and rap), dubstep (a very intelligent and slow mix of techno, DnB and lo-fi electro samples), indietronica (a broad genre mixing intelligent indie rock, electro and a lot of minor influences) and later nu disco (a revival of stylish funk and disco updated with electro and house sounds) BIBREF16. Meanwhile, there were popular commercial genres like garage rock revival (which updated rock and punk with danceable beats), emo rock/post grunge (aggressive, easy and even more catchy), urban breaks (a form of RnB with heavy electro and rap influences) and above all electropop (the evolution of dancepop, which included elements of electro/house and consolidated the image of seductive female singers, also aimed at the youngest teen audiences). Among those genres epic trance (a euphoric, aggressive and easy form of melodic techno) emerged from the biggest dedicated festivals and became mainstream with overpaid DJ superstars BIBREF17. Various forms of nu jazz, hardcore techno, metal and house music remained in the underground.
Then, in the 2010s, euro EDM house music (a sample-based and heavily danceable mix of house and electro) finally came out of the underground communities and, borrowing the figure of the DJ superstar from trance, reached commercial success, but left underground communities unsatisfied (they were mostly producing complex electro, a mix of dubstep and avant-garde house). Also drumstep (a faster and more aggressive version of dubstep, influenced by EDM and techno) and trap music (a form of dark and heavy techno rap) emerged from the underground and had good commercial success. Genres like indiefolk (a modern and eclectic folk rock with country influences) and nu prog rock (another eclectic, experimental and aggressive form of rock with many influences from electro, metal and rap) had moderate success. The availability of websites for user-generated content such as YouTube helped to popularize genres like electro reggaetón (Latin rap with new influences from reggae and electro), cloud rap (an eclectic and intelligent form of rap with electro influences) and JK-pop (a broad label that stands for Japanese and Korean pop, but emerged from all over the world with common features: YouTubers producing easy and catchy pop music with heavy influences from electropop, discopop and eurodance) BIBREF18. Moreover, technologies helped the creation of mainstream genres such as tropical house (a very melodic, soft and easy form of house music sung in a modern RnB style). In the underground there are still many minor genres, such as bro country (an easy form of country played by young and attractive guys, influenced by electro and rap), future hardstyle (a form of aggressive trance with easy vocals, similar to tropical house) and afrobeat (a form of rap that is popular in western Africa, with influences from reggaetón and traditional African music). From this description we can highlight some general and recurrent tendencies, for example the fact that the music industry converts spontaneous novelties into commercial successes, but when its products leave audiences frustrated (as happened with west swing, glam metal, cool jazz, punk and many others), they generate reactions in underground cultures that trigger a change towards more aggressive versions of the genre. In general, underground and spontaneous genres are more complex and avant-garde. Another pattern is that media allowed more and more local underground genres to influence the mainstream ones, resulting in a combinatorial explosion of possible new genres, most of which remain underground. We suggest that a set of cross-genre characteristics needs to be quantified in order to compute, with data science techniques, weaker but possibly significant patterns that cannot be observed with qualitative methods. In the next section we define a quantitative methodology and annotate a dataset to perform experiments. Data Description From the description of music genres provided above, it emerges that there is a limited number of super-genres and derivation lines BIBREF19, BIBREF20, as shown in figure FIGREF1. From a computational perspective, genres are classes and, although they can be treated by machine learning algorithms, they do not include information about the relations between them. In order to formalize the relations between genres for computing purposes, we define a continuous genre scale from the most experimental and introverted super-genre to the most euphoric and inclusive one.
We selected from Wikipedia the 77 genres that we mentioned in bold in the previous paragraph and asked two independent raters to read the Wikipedia pages of the genres, listen to samples or artists of the genres (if they did not already know them) and then annotate the following dimensions. Genre features: genre scale (a score between 0 and 1 where 0=downtempo/industrial, 0.1=metal, 0.15=garage/punk/hardcore, 0.2=rock, 0.25=pop rock, 0.3=blues, 0.4=country, 0.5=pop/traditional, 0.55=gospel, 0.6=jazz, 0.65=latin, 0.7=RnB/soul/funk, 0.75=reggae/jamaican, 0.8=rap, 0.85=DnB, 0.9=electro/house, 0.95=EDM, 1=techno/trance), category of the super-genre (as defined in figure FIGREF1) and influence variety (0.1=influence only from the same super-genre, 1=influences from all the super-genres). Perceived acoustic features: sound (0=acoustic, 0.35=amplified, 0.65=sampled/manipulated, 1=synthetic), vocal melody (1=melodic vocals, 0=rhythmical vocals/spoken words), vocal scream (1=screaming, 0=soft singing), vocal emotional (1=emotional vocals, 0=monotone vocals), virtuous (0.5=normal, 0=not technical at all, 1=very technical); richbass (1=the bass is loud and clear, 0=there is no bass sound); offbeat (1=the genre has a strong offbeat, 0=the genre has no offbeat). Time: decade (classes between the 1900s and 2010s) and the year representative of the time when the genre became mainstream. Place features: origin place (0=Australia, 0.025=west USA, 0.05=south USA, 0.075=north/east USA, 0.1=UK, 0.2=Jamaica, 0.3=Caribbean, 0.4=Latin America, 0.5=Africa, 0.6=south EU, 0.65=north/east EU, 0.7=Middle East, 0.8=India, 0.9=China/south Asia, 1=Korea/north Asia); place urban (0=the origin place is rural, 1=the origin place is urban); place poor (0=the origin place is poor, 1=the origin place is rich). Media features: media mainstream (0=independent media, 1=mainstream media, 0.5=both), media live (0=sells recorded music, 1=sells live performance). Emotion features: joy/sad (1=joy, 0=sad), anticipation/surprise (1=anticipation or already known, 0=surprise), anger/calm (1=anger, 0=calm).
Style features: novelty (0=derivative, 0.5=normal, 1=totally new characteristics and type); retro (1=the genre is a revival, 0.5=normal, 0=the genre is not a revival); lyrics love/explicit (0.5=normal, 1=love lyrics, 0=explicit lyrics); style upbeat (1=extroverted and danceable, 0=introverted and depressive); style instrumental (1=totally instrumental, 0=totally sung); style eclecticism (1=includes many styles, 0=has a stereotypical style); style longsongs (0.5=radio format (3.30 minutes), 1=more than 6 minutes on average, 0=less than 1 minute on average); largebands (1=bands of 10 or more people, 0.1=just one musician); subculture (1=the audience is one subculture or more, 0=the audience is the main culture); hedonism (1=the genre promotes hedonism, 0=the genre does not promote hedonism); protest (1=the genre promotes protest, 0=the genre does not promote protest); onlyblack (1=genre produced only by black communities, 0=genre produced only by white communities); 44beat (1=the genre has a 4/4 beat, 0=the genre has other types of measures); outcasts (1=the audience is poor people, 0=the audience is rich people); dancing (1=the genre is for dancing, 0=the genre is for home listening); drugs (1=the audience uses drugs, 0=the audience does not use drugs). MUSIC features: mellow (1=slow and romantic, 0=fast and furious), sophisticated (1=culturally complex, 0=easy to understand), intense (1=aggressive and loud, 0=soft and relaxing), contemporary (1=rhythmical and catchy, 0=not rhythmical and old-fashioned), uncomplicated (1=simple and well-known, 0=strange and disgusting). We computed the agreement between the two annotators using Cronbach's alpha statistics BIBREF21. The average over all features is $\alpha =0.793$, which is good. Among the features with the highest agreement are the genre, place, sound and MUSIC features. In particular, the genre scale got an excellent $\alpha =0.957$, meaning that the genre scale is a reliable measure. In the final annotation all the divergences between the two annotators were agreed upon and the scores were averaged or corrected accordingly. The final dataset is available to the scientific community. Experiments What are the tendencies that confirm or disconfirm previous findings? We can already make very interesting observations just from the distributions of the features, reported in figure FIGREF11. We can see that most of the popular music genres have a novelty score between 0.5 and 0.65, which is medium-high. This confirms the findings of previous work about the optimal level of innovation and acceptance. It is interesting to note that almost all the popular genres come from an urban context, where the connections between communities are more likely to create innovations. Moreover, we can see that the distribution of mainstream media is bi-modal: this means that an important percentage of genres are popularized by means of underground or new media. This happened many times in music history, from the free radios to the web of user-generated content. Crucially, popular music genres strongly tend to be perceived as technically virtuous. Why did the sound change from acoustic to synthetic during the last century? To answer this question we used a correlation analysis with the sound feature as the target. It emerged that the change towards sampled and synthetic sound is correlated to dancing, to intensity/aggressiveness, to larger drug usage and to a large variety of influences, while it is negatively correlated to large bands and mellow tones.
In summary, a more synthetic sound allowed more intense and danceable music, reducing the number of musicians (in other words, reducing costs for the industry). How did the music taste of popular music audiences change in the last century? The trend lines of the MUSIC model features, reported in figure FIGREF12, reveal that audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious. In other words, the audiences of popular music are getting more demanding as the quality and variety of the music products increase. Is it possible to predict future genres by means of the genre scale? To answer this question we used time series forecasting. In particular, we exploited all the features in the years from 1900 to 2010 to train a predictive model of the scores from 2011 to 2018. As the year of the genre label is arbitrary, predicted scores and labels may not be aligned, thus MAE or RMSE are not suitable evaluation metrics. As evaluation metric we defined average accuracy as $a=\frac{\sum count(|l-h|<0.1)}{count(t)} $, where the label (l) and the prediction (h) can be anywhere within the year series (t); a minimal sketch of this metric is given below. Table TABREF13 shows the results of the prediction of the genre scale for the years 2011 to 2018 with different algorithms: linear regression (LR), Support Vector Machine (SVM), multi-layer perceptron (MLP), nearest neighbors (IBk), and a meta classifier (stacking) with SVM+MLP+IBk. The results reveal that the forecasting of music genres is a non-linear problem, that IBk predicts the closest sequence to the annotated one and that a meta classifier with nearest neighbors BIBREF22 is the most accurate in the prediction. Deep Learning algorithms do not perform well in this case because the dataset is not large enough. Last remark: feature reduction (from 41 to 14) does not affect the results obtained with IBk and the meta classifiers, indicating that there is no curse of dimensionality. Conclusion, Acknowledgments and Future Work We annotated and presented a new dataset for the computational analysis of popular music. Our preliminary studies confirm previous findings (there is an optimal level of novelty to become popular and this is more likely to happen in urban contexts) and reveal that audiences tend to like contemporary and intense music experiences. We also performed a back test for the prediction of future music genres in a time series, which turned out to be a non-linear problem. For the future we would like to update the corpus with more features about audience types and commercial success. This work has also been inspired by Music Map.
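The average-accuracy measure defined in the Experiments section above can be read as counting a forecast as correct when it falls within 0.1 of some annotated label in the 2011-2018 window; a minimal sketch under that reading is given below, with invented genre-scale values.

```python
def average_accuracy(labels, predictions, tol=0.1):
    """One reading of the metric a = sum(count(|l - h| < 0.1)) / count(t):
    a prediction h counts as a hit if it lies within `tol` of any annotated
    label l in the test window (predictions and labels need not be aligned
    year by year)."""
    hits = sum(1 for h in predictions
               if any(abs(l - h) < tol for l in labels))
    return hits / len(predictions)

# Invented genre-scale labels (2011-2018) and forecasts from a model.
labels = [0.95, 0.90, 0.80, 0.85, 1.00, 0.40, 0.90, 0.85]
preds  = [0.92, 0.70, 0.83, 0.88, 0.97, 0.45, 0.60, 0.86]
print(average_accuracy(labels, preds))   # 0.75 with these values
```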
audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious
95bb3ea4ebc3f2174846e8d422abc076e1407d6a
95bb3ea4ebc3f2174846e8d422abc076e1407d6a_0
Q: Which decades did they look at? Text: Motivation, Background and Related Work Until recent times, the research in popular music was mostly bound to a non-computational approach BIBREF0 but the availability of new data, models and algorithms helped the rise of new research trends. Computational analysis of music structure BIBREF1 is focused on parsing and annotate patters in music files; computational music generation BIBREF2 trains systems able to generate songs with specific music styles; computational sociology of music analyzes databases annotated with metadata such as tempo, key, BPMs and similar (generally referred to as sonic features); even psychology of music use data to find new models. Recent papers in computational sociology investigated novelty in popular music, finding that artists who are highly culturally and geographically connected are more likely to create novel songs, especially when they span multiple genres, are women, or are in the early stages of their careers BIBREF3. Using the position in Billboard charts and the sonic features of more than 20K songs, it has been demonstrated that the songs exhibiting some degree of optimal differentiation in novelty are more likely to rise to the top of the charts BIBREF4. These findings offer very interesting perspectives on how popular culture impacts the competition of novel genres in cultural markets. Another problem addressed in this research field is the distinction between what is popular and what is significative to a musical context BIBREF5. Using a user-generated set of tags collected through an online music platform, it has been possible to compute a set of metrics, such as novelty, burst or duration, from a co-occurrence tag network relative to music albums, in order to find the tags that propagate more and the albums having a significative impact. Combining sonic features and topic extraction techniques from approximately 17K tracks, scholars demonstrate quantitative trends in harmonic and timbral properties that brought changes in music sound around 1964, 1983 and 1991 BIBREF6. Beside these research fields, there is a trend in the psychology of music that studies how the musical preferences are reflected in the dimensions of personality BIBREF7. From this kind of research emerged the MUSIC model BIBREF8, which found that genre preferences can be decomposed into five factors: Mellow (relaxed, slow, and romantic), Unpretentious, (easy, soft, well-known), Sophisticated (complex, intelligent or avant-garde), Intense (loud, aggressive, and tense) and Contemporary (catchy, rhythmic or danceable). Is it possible to find trends in the characteristics of the genres? And is it possible to predict the characteristics of future genres? To answer these questions, we produced a hand-crafted dataset with the intent to put together MUSIC, style and sonic features, annotated by music genre and indexed by time and decade. To do so, we collected a list of popular music genres by decade from Wikipedia and instructed annotators to score them. The paper is structured as follows: In section SECREF2 we provide a brief history of popular music, in section SECREF3 we describe the dataset and in section SECREF4 we provide the results of the experiments. In the end we draw some conclusions. Brief introduction to popular music We define ”popular music” as the music which finds appeal out of culturally closed music groups, also thanks to its commercial nature. 
Non-popular music can be divided into three broad groups: classical music (produced and performed by experts with a specific education), folk/world music (produced and performed by traditional cultures), and utility music (such as hymns and military marches, not primarily intended for commercial purposes). Popular music is a great means for spreading culture, and a perfect ground where cultural practices and industry processes combine. In particular, the cultural processes select novelties, broadly represented by means of underground music genres, and the industry tries to monetize them, making them commercially successful. In the following description we include almost all the genres that reach commercial success and a few of the underground genres that are related to them. Arguably the beginning of popular music is in the USA between the 1880s and 1890s with spirituals, work and shout chants BIBREF9, which we classify halfway between world music and popular music. The first real popular music genres in the 1900s were ragtime, a pioneer of piano blues and jazz, and gospel, derived from religious chants of Afro-American communities and a pioneer of soul and RnB. The 1910s saw the birth of tin pan alley (simple pop songs for piano composed by professionals) and dixieland jazz, a spontaneous melting pot of ragtime, classical, Afro-American and Haitian music BIBREF10. In the 1920s, blues and hillbilly country became popular. The former was born as a form of expression of black communities and outcasts, while the latter was a form of entertainment of the white rural communities. Tin pan alley piano composers soon commercialized tracks in the style of blues, generating boogie-woogie as a reaction, an underground and very aggressive piano blues played by black musicians. In Chicago and New York jazz became more sophisticated and spread to Europe, where gypsy jazz became popular soon after. Both in the US and in Europe, the 1930s were dominated by swing, the most popular form of jazz, which was at the same time danceable, melancholic, catchy and intelligent. In the US, west swing, a mellow and easy type of country music, became popular thanks to western movies. The 1940s in the US saw a revival of dixieland jazz, the rise of be-bop (one of the most mellow and intelligent forms of jazz), the advent of crooners (male pop singers) and the establishment of back-to-the-roots types of country music such as bluegrass, a reaction against west swing, modernity and electric guitars. In the underground there was honky-tonk, a sad kind of country music that would influence folk rock. In the 1950s, rock and roll was created by black communities as an electric fusion of blues, boogie-woogie and hillbilly, and was soon commercialized for large white audiences. Besides this, many things happened: urban blues forged its modern sound using electric guitars and harmonicas; cool jazz, played also by white people, launched a more commercial and clean style; gospel influenced both doo-wop (a-cappella music performed by groups of black singers imitating crooners) and RnB, where black female singers played with a jazz or blues band. 
The 1960s saw an explosion of genres: countrypolitan, an electric and easy form of country music, became the most commercialized genre in the US; the first independent labels (in particular Motown) turned doo-wop into well-arranged and hyper-produced soul music with good commercial success BIBREF11; ska, a form of dance music with a very typical offbeat, became popular outside of Jamaica; garage (and also surf) rock arose as the first forms of independent commercial rock music, sometimes aggressive and sometimes easy; in the UK, beat popularized a new style of hyper-produced rock music that had very big commercial success; blues rock emerged as the mix of the two genres; teenypop was created in order to sell records to younger audiences; independent movements like the beat generation and the hippies helped the rise of folk rock and psychedelic rock respectively BIBREF12; funk emerged from soul and jazz (while jazz turned into the extremely complex free jazz as a reaction against the commercial cool jazz, but remained underground). In the 1970s, progressive rock turned psychedelia into a more complex form; independent radios contributed to its diffusion, as well as to the popularity of songwriters, an evolution of folk singers that proliferated from Latin America (nueva canción) to western Europe. In the meanwhile, TV became a new channel for music marketing, exploited by glam rock, which emerged as a form of pop rock music with a fake transgressive image and eclectic arrangements; fusion jazz began to include funk and psychedelic elements; the disillusion due to the end of the hippie movement left angry and frustrated masses listening to hard rock and blues rock, which included anti-religious symbols and merged into heavy metal. Then garage and independent rock, fueled by anger and frustration, were commercialized as punk rock at the end of the decade, while disco music (a catchy and hyper-danceable version of soul and RnB) was played in famous clubs and linked to sex and fun, gathering the LGBT communities. The poorest black communities, kept out of the disco clubs, began to perform at house parties, giving rise to old skool rap, whose sampled sounds and rhythmic vocals were a great novelty but remained underground. The real novelties popularized in this decade were ambient (a very intelligent commercial downtempo music derived from classical music), reggae (which mixed ska, rock and folk and from Jamaica conquered the UK) and above all synth electronica, a type of industrial experimental music that became popular for its new sound and style, bridging the gap between rock and electronic music. This would deeply change the sound of the following decades BIBREF13. The 1980s began with the rise of synth pop and new wave. The former, also referred to as ”new romantics”, was a popular music style that mixed catchy rhythms with simple melodies and synthetic sounds, while the latter was a hipster mix of glam rock and post-punk with a positive view (as opposed to the depressive mood of the real post-punk), with minor influences from synth electronica and reggae. The music industry also created glam metal for heavy metal audiences, who reacted with extreme forms like thrash metal; a similar story happened with punk audiences, who soon moved to extreme forms like hardcore, which remained underground but highlighted a serious tension between the industry and the audiences that wanted spontaneous genres BIBREF14. 
In the meanwhile, discopop produced a very catchy, easy and danceable mix of disco, funk and synthetic sounds that greatly improved the quality of records, yielding one of the best-selling genres in the whole history of popular music. In a similar way, smooth jazz (a mix of mellow and easy melodies with synthetic rhythmical bases) and soft adult (a mellow and easy form of pop) obtained good commercial success. Techno music emerged as a new danceable, synthetic and funky genre, and hard rap became popular with both black and white audiences, while electro (break dance at the time) and (pioneering) house music remained underground because of their overly innovative sampled sounds. In the 1990s, alternative/grunge rock solved the tension between commercial and spontaneous genres with a style of rock that was at the same time aggressive, intelligent and easy to listen to. The same happened with skatepunk (a fast, happy and commercial form of rock) and rap metal (a mix of the two genres), while britpop continued the tradition of pop rock initiated with beat. RnB evolved into new jack swing (a form of softer, rhythmical and easy funk) and techno split into the commercial eurodance (a mix of techno and disco music with synthetic sounds, manipulated RnB vocals and strong beats) and the subculture of rave (an extremely aggressive form of techno played at secret parties and later in clubs), which helped the creation of goa trance, which new hippie communities used to accompany drug trips BIBREF15. An intelligent and slow mix of electro and RnB became popular as trip hop, while an aggressive and extremely fast form of electro with reggae influences became popular as jungle/DnB. By the end of the decade the most commercially successful genres were dancepop (a form of pop that included elements of funk, disco and eurodance in a sexy image) and gangsta rap/hip hop, which reached its stereotypical form and became mainstream, while independent labels (which produced many subgenres, from shoegaze/indie rock to electro and house) remained in the underground. In the underground, but in Latin America, there was also reggaetón, a Latin form of rap. The rise of free downloads and later of social network websites in the 2000s opened new channels for independent genres, which allowed the rise of grime (a type of electro mixing DnB and rap), dubstep (a very intelligent and slow mix of techno, DnB and electro lo-fi samples), indietronica (a broad genre mixing intelligent indie rock, electro and a lot of minor influences) and later nu disco (a revival of stylish funk and disco updated with electro and house sounds) BIBREF16. In the meanwhile, there were popular commercial genres like garage rock revival (which updated rock and punk with danceable beats), emo rock/post grunge (aggressive, easy and even more catchy), urban breaks (a form of RnB with heavy electro and rap influences) and above all electropop (the evolution of dancepop, which included elements of electro/house and consolidated the image of seductive female singers, also aimed at the youngest audiences of teens). Among those genres, epic trance (a euphoric, aggressive and easy form of melodic techno) emerged from the biggest dedicated festivals and became mainstream with overpaid DJ superstars BIBREF17. In the underground remained various forms of nu jazz, hardcore techno, metal and house music. 
Then, in the 2010s, euro EDM house music (a sample-based and heavily danceable mix of house and electro) finally came out of underground communities and, borrowing the figure of the DJ superstar from trance, reached commercial success, but left underground communities unsatisfied (they were mostly producing complex electro, a mix of dubstep and avant-garde house). Also drumstep (a faster and aggressive version of dubstep, influenced by EDM and techno) and trap music (a form of dark and heavy techno rap) emerged from the underground and had good commercial success. Genres like indiefolk (a modern and eclectic folk rock with country influences) and nu prog rock (another eclectic, experimental and aggressive form of rock with many influences from electro, metal and rap) had moderate success. The availability of websites for user-generated content such as YouTube helped to popularize genres like electro reggaetón (Latin rap with new influences from reggae and electro), cloud rap (an eclectic and intelligent form of rap with electro influences) and JK-pop (a broad label that stands for Japanese and Korean pop, but emerged from all over the world with common features: YouTubers that produce easy and catchy pop music with heavy influences from electropop, discopop and eurodance) BIBREF18. Moreover, technologies helped the creation of mainstream genres such as tropical house (a very melodic, soft and easy form of house music sung in a modern RnB style). In the underground there are still many minor genres, such as bro country (an easy form of country played by young and attractive guys and influenced by electro and rap), future hardstyle (a form of aggressive trance with easy vocals similar to tropical house) and afrobeat (a form of rap that is popular in western Africa with influences from reggaetón and traditional African music). From this description we can highlight some general and recurrent tendencies, for example the fact that the music industry converts spontaneous novelties into commercial successes, but when its products leave audiences frustrated (as happened with west swing, glam metal, cool jazz, punk and many others), they generate reactions in underground cultures that trigger a change towards more aggressive versions of the genre. In general, underground and spontaneous genres are more complex and avant-garde. Another pattern is that media allowed more and more local underground genres to influence the mainstream ones, ending in a combinatorial explosion of possible new genres, most of which remain underground. We suggest that we need to quantify a set of cross-genre characteristics in order to compute, with data science techniques, some weaker but possibly significant patterns that cannot be observed with qualitative methods. In the next section we define a quantitative methodology and we annotate a dataset to perform experiments. Data Description From the description of music genres provided above, it emerges that there is a limited number of super-genres and derivation lines BIBREF19, BIBREF20, as shown in figure FIGREF1. From a computational perspective, genres are classes and, although they can be treated by machine learning algorithms, they do not include information about the relations between them. In order to formalize the relations between genres for computing purposes, we define a continuous genre scale from the most experimental and introverted super-genre to the most euphoric and inclusive one. 
We selected from Wikipedia the 77 genres that we mentioned in bold in the previous paragraph and asked two independent raters to read the Wikipedia pages of the genres, listen to samples or artists of the genres (if they did not know them already) and then annotate the following dimensions: genre features: genre scale (a score between 0 and 1 where 0=downtempo/industrial, 0.1=metal, 0.15=garage/punk/hardcore, 0.2=rock, 0.25=pop rock, 0.3=blues, 0.4=country, 0.5=pop/traditional, 0.55=gospel, 0.6=jazz, 0.65=latin, 0.7=RnB/soul/funk, 0.75=reggae/jamaican, 0.8=rap, 0.85=DnB, 0.9=electro/house, 0.95=EDM, 1=techno/trance), category of the super-genre (as defined in figure FIGREF1) and influence variety (0.1=influence only from the same super-genre, 1=influences from all the super-genres); perceived acoustic features: sound (0=acoustic, 0.35=amplified, 0.65=sampled/manipulated, 1=synthetic), vocal melody (1=melodic vocals, 0=rhythmical vocals/spoken words), vocal scream (1=screaming, 0=soft singing), vocal emotional (1=emotional vocals, 0=monotone vocals), virtuous (0.5=normal, 0=not technical at all, 1=very technical), richbass (1=the bass is loud and clear, 0=there is no bass sound), offbeat (1=the genre has a strong offbeat, 0=the genre has no offbeat); time: decade (classes between 1900s and 2010s) and year representative of the time when the genre became mainstream; place features: origin place (0=Australia, 0.025=west USA, 0.05=south USA, 0.075=north/east USA, 0.1=UK, 0.2=Jamaica, 0.3=Caribbean, 0.4=Latin America, 0.5=Africa, 0.6=south EU, 0.65=north/east EU, 0.7=Middle East, 0.8=India, 0.9=China/south Asia, 1=Korea/north Asia), place urban (0=the origin place is rural, 1=the origin place is urban), place poor (0=the origin place is poor, 1=the origin place is rich); media features: media mainstream (0=independent media, 1=mainstream media, 0.5=both), media live (0=sells recorded music, 1=sells live performances); emotion features: joy/sad (1=joy, 0=sad), anticipation/surprise (1=anticipation or already known, 0=surprise), anger/calm (1=anger, 0=calm); 
style features: novelty (0=derivative, 0.5=normal, 1=totally new characteristics and type), retro (1=the genre is a revival, 0.5=normal, 0=the genre is not a revival), lyrics love/explicit (0.5=normal, 1=love lyrics, 0=explicit lyrics), style upbeat (1=extroverted and danceable, 0=introverted and depressive), style instrumental (1=totally instrumental, 0=totally sung), style eclecticism (1=includes many styles, 0=has a stereotypical style), style longsongs (0.5=radio format (3.30 minutes), 1=more than 6 minutes on average, 0=less than 1 minute on average), largebands (1=bands of 10 or more people, 0.1=just one musician), subculture (1=the audience is one subculture or more, 0=the audience is the main culture), hedonism (1=the genre promotes hedonism, 0=the genre does not promote hedonism), protest (1=the genre promotes protest, 0=the genre does not promote protest), onlyblack (1=the genre is produced only by black communities, 0=the genre is produced only by white communities), 44beat (1=the genre has a 4/4 beat, 0=the genre has other types of measures), outcasts (1=the audience is poor people, 0=the audience is rich people), dancing (1=the genre is for dancing, 0=the genre is for home listening), drugs (1=the audience uses drugs, 0=the audience does not use drugs); MUSIC features: mellow (1=slow and romantic, 0=fast and furious), sophisticated (1=culturally complex, 0=easy to understand), intense (1=aggressive and loud, 0=soft and relaxing), contemporary (1=rhythmical and catchy, 0=not rhythmical and old-fashioned), uncomplicated (1=simple and well-known, 0=strange and disgusting). We computed the agreement between the two annotators using Cronbach's alpha statistics BIBREF21. The average over all features is $\alpha =0.793$, which is good. Among the features with the highest agreement are the genre, place, sound and MUSIC features. In particular, the genre scale got an excellent $\alpha =0.957$, meaning that the genre scale is a reliable measure. In the final annotation all the divergences between the two annotators were agreed upon and the scores were averaged or corrected accordingly. The final dataset is available to the scientific community. Experiments What are the tendencies that confirm or disconfirm previous findings? We noticed very interesting patterns just from the distributions of the features, reported in figure FIGREF11. We can see that most of the popular music genres have a novelty score between 0.5 and 0.65, which is medium-high. This confirms the findings of previous work about the optimal level of innovation and acceptance. It is interesting to note that almost all the popular genres come from an urban context, where the connections between communities are more likely to create innovations. Moreover, we can see that the distribution of mainstream media is bi-modal: this means that an important percentage of genres are popularized by means of underground or new media. This happened many times in music history, from the free radios to the web of user-generated content. Crucially, popular music genres strongly tend to be perceived as technically virtuous. Why did the sound change from acoustic to synthetic during the last century? To answer this question we used a correlation analysis with the sound feature as target. It emerged that the change towards sampled and synthetic sound is correlated with dancing, with intensity/aggressiveness, with larger drug usage and with a large variety of influences, while it is negatively correlated with large bands and mellow tones. 
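To make the two statistics just mentioned concrete, here is a minimal sketch (not the authors' code) of how the inter-rater agreement and the correlation analysis with the sound target could be computed with standard Python tooling; the file name and column names are hypothetical, as the released dataset may use different ones.

import numpy as np
import pandas as pd

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (n_genres, n_raters) matrix of scores on one feature."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                         # number of raters (here 2)
    item_vars = ratings.var(axis=0, ddof=1)      # per-rater sample variance
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# toy example: two raters scoring the genre scale for five genres
rater_a = [0.20, 0.90, 0.50, 0.80, 0.10]
rater_b = [0.25, 0.90, 0.55, 0.80, 0.15]
print(cronbach_alpha(np.column_stack([rater_a, rater_b])))  # close to 1 = high agreement

# correlation analysis with the "sound" feature as target (hypothetical column names)
df = pd.read_csv("popular_music_genres.csv")
print(df.select_dtypes("number").corr()["sound"].sort_values(ascending=False))

Applying the same alpha function feature by feature and averaging the results would yield the kind of per-feature agreement summary reported above.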
In summary, a more synthetic sound allowed more intense and danceable music, reducing the number of musicians (in other words, reducing costs for the industry). How did the music taste of the popular music audience change in the last century? The trend lines of the MUSIC model features, reported in figure FIGREF12, reveal that audiences wanted products that were more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious. In other words, the audiences of popular music are getting more demanding as the quality and variety of the music products increase. Is it possible to predict future genres by means of the genre scale? To answer this question we used time series forecasting. In particular, we exploited all the features in the years from 1900 to 2010 to train a predictive model of the scores from 2011 to 2018. As the year of the genre label is arbitrary, predicted scores and labels may not be aligned, thus MAE or RMSE are not suitable evaluation metrics. As an evaluation metric we defined average accuracy as $a=\frac{\sum count(|l-h|<0.1)}{count(t)}$, where the label (l) and the prediction (h) can be anywhere within the year series (t). Table TABREF13 shows the results of the prediction of the genre scale for the years 2011 to 2018 with different algorithms: linear regression (LR), Support Vector Machine (SVM), multi-layer perceptron (MLP), nearest neighbors (IBk), and a meta classifier (stacking) with SVM+MLP+IBk. The results reveal that the forecasting of music genres is a non-linear problem, that IBk predicts the closest sequence to the annotated one and that a meta classifier with nearest neighbors BIBREF22 is the most accurate in the prediction. Deep learning algorithms do not perform well in this case because the dataset is not large enough. Last remark: feature reduction (from 41 to 14) does not affect the results obtained with IBk and meta classifiers, indicating that there is no curse of dimensionality. Conclusion Acknowledgments and Future We annotated and presented a new dataset for the computational analysis of popular music. Our preliminary studies confirm previous findings (there is an optimal level of novelty to become popular and this is more likely to happen in urban contexts) and reveal that audiences tend to like contemporary and intense music experiences. We also performed a back test for the prediction of future music genres in a time series, which turned out to be a non-linear problem. For the future we would like to update the corpus with more features about audience types and commercial success. This work has also been inspired by Music Map.
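As a worked illustration of the evaluation metric defined above, the following sketch implements one possible reading of it (not the authors' code): a predicted genre-scale score counts as a hit if it falls within 0.1 of any annotated score in the 2011-2018 series, regardless of year alignment, and average accuracy is the fraction of hits over the length of the series. The scores below are made up for the example; in a scikit-learn setting, the IBk learner used in the paper corresponds roughly to a k-nearest-neighbors regressor.

def average_accuracy(labels, predictions, tol=0.1):
    """Fraction of predictions lying within tol of at least one annotated score."""
    hits = sum(1 for h in predictions if any(abs(l - h) < tol for l in labels))
    return hits / len(predictions)

# toy 2011-2018 series with made-up genre-scale scores
labels      = [0.95, 0.85, 0.80, 0.90, 0.40, 0.30, 0.90, 0.50]
predictions = [0.93, 0.70, 0.82, 0.88, 0.45, 0.10, 0.91, 0.55]
print(average_accuracy(labels, predictions))  # 0.75: six of the eight predictions are hits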
between 1900s and 2010s
3ebdc15480250f130cf8f5ab82b0595e4d870e2f
3ebdc15480250f130cf8f5ab82b0595e4d870e2f_0
Q: How many genres did they collect from? Text: (same text as in the previous question)
77 genres
bbc58b193c08ccb2a1e8235a36273785a3b375fb
bbc58b193c08ccb2a1e8235a36273785a3b375fb_0
Q: Does the paper mention other works proposing methods to detect anglicisms in Spanish? Text: Introduction The study of English influence in the Spanish language has been a hot topic in Hispanic linguistics for decades, particularly concerning lexical borrowing or anglicisms BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Lexical borrowing is a phenomenon that affects all languages and constitutes a productive mechanism for word-formation, especially in the press. chesleypaulapredicting2010 estimated that a reader of French newspapers encountered a new lexical borrowing for every 1,000 words. In Chilean newspapers, lexical borrowings account for approximately 30% of neologisms, 80% of those corresponding to English loanwords BIBREF7. Detecting lexical borrowings is relevant both for lexicographic purposes and for NLP downstream tasks BIBREF8, BIBREF9. However, strategies to track and register lexical borrowings have traditionally relied on manual review of corpora. In this paper we present: (1) a corpus of newspaper headlines in European Spanish annotated with emerging anglicisms and (2) a CRF baseline model for automatic anglicism extraction in Spanish newswire. Related Work Corpus-based studies of English borrowings in Spanish media have traditionally relied on manual evaluation of either previously compiled general corpora such as CREA BIBREF10, BIBREF11, BIBREF12, BIBREF13, or new tailor-made corpora designed to analyze specific genres, varieties or phenomena BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. In terms of automatic detection of anglicisms, previous approaches in different languages have mostly depended on resource lookup (lexicon or corpus frequencies), character n-grams and pattern matching. alex-2008-comparing combined lexicon lookup and a search engine module that used the web as a corpus to detect English inclusions in a corpus of German texts and compared her results with a maxent Markov model. furiassi2007retrieval explored corpora lookup and character n-grams to extract false anglicisms from a corpus of Italian newspapers. andersen2012semi used dictionary lookup, regular expressions and lexicon-derived frequencies of character n-grams to detect anglicism candidates in the Norwegian Newspaper Corpus (NNC) BIBREF21, while losnegaard2012data explored a Machine Learning approach to anglicism detection in Norwegian by using TiMBL (Tilburg Memory-Based Learner, an implementation of a k-nearest neighbor classifier) with character trigrams as features. garley-hockenmaier-2012-beefmoves trained a maxent classifier with character n-gram and morphological features to identify anglicisms in German online communities. In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism candidates from a corpus of tweets in US Spanish. Work within the code-switching community has also dealt with language identification on multilingual corpora. Due to the nature of code-switching, these models have primarily focused on oral corpora and social media datasets BIBREF22, BIBREF23, BIBREF24. 
In the last shared task of language identification in code-switched data BIBREF23, approaches to English-Spanish included CRFs models BIBREF25, BIBREF26, BIBREF27, BIBREF28, logistic regression BIBREF29 and LSTMs models BIBREF30, BIBREF31. The scope and nature of lexical borrowing is, however, somewhat different to that of code-switching. In fact, applying code-switching models to lexical borrowing detection has previously proved to be unsuccessful, as they tend to overestimate the number of anglicisms BIBREF32. In the next section we address the differences between both phenomena and set the scope of this project. Anglicism: Scope of the Phenomenon Linguistic borrowing can be defined as the transference of linguistic elements between two languages. Borrowing and code-switching have frequently been described as a continuum BIBREF33, with a fuzzy frontier between the two. As a result, a precise definition of what borrowing is remains elusive BIBREF34 and some authors prefer to talk about code-mixing in general BIBREF35 or “lone other-language incorporations" BIBREF36. Lexical borrowing in particular involves the incorporation of single lexical units from one language into another language and is usually accompanied by morphological and phonological modification to conform with the patterns of the recipient language BIBREF37, BIBREF38. By definition, code-switches are not integrated into a recipient language, unlike established loanwords BIBREF39. While code-switches are usually fluent multiword interferences that normally comply with grammatical restrictions in both languages and that are produced by bilingual speakers in bilingual discourses, lexical borrowings are words used by monolingual individuals that eventually become lexicalized and assimilated as part of the recipient language lexicon until the knowledge of “foreign" origin disappears BIBREF40. In terms of approaching the problem, automatic code-switching identification has been framed as a sequence modeling problem where every token receives a language ID label (as in a POS-tagging task). Borrowing detection, on the other hand, while it can also be transformed into a sequence labeling problem, is an extraction task, where only certain spans of texts will be labeled (in the fashion of a NER task). Various typologies have been proposed that aim to classify borrowings according to different criteria, both with a cross-linguistic perspective and also specifically aimed to characterize English inclusions in Spanish BIBREF34, BIBREF41, BIBREF42, BIBREF5. In this work, we will be focusing on unassimilated lexical borrowings (sometimes called foreignisms), i.e. words from English origin that are introduced into Spanish without any morphological or orthographic adaptation. Corpus description and annotation ::: Corpus description In this subsection we describe the characteristics of the corpus. We first introduce the main corpus, with the usual train/development/test split that was used to train, tune and evaluate the model. We then present an additional test set that was designed to assess the performance of the model on more naturalistic data. Corpus description and annotation ::: Corpus description ::: Main Corpus The main corpus consists of a collection of monolingual newspaper headlines written in European Spanish. The corpus contains 16,553 headlines, which amounts to 244,114 tokens. Out of those 16,553 headlines, 1,109 contain at least one anglicism. 
The total number of anglicisms is 1,176 (most of them are single words, although some are multiword expressions). The corpus was divided into training, development and test sets. The proportions of headlines, tokens and anglicisms in each corpus split can be found in Table TABREF6. The headlines in this corpus come from the Spanish newspaper eldiario.es, a progressive online newspaper based in Spain. eldiario.es is one of the main national newspapers from Spain and, to the best of our knowledge, the only one that publishes its content under a Creative Commons license, which made it ideal for making the corpus publicly available. The headlines were extracted from the newspaper website through web scraping and range from September 2012 to January 2020. Only the following sections were included: economy, technology, lifestyle, music, TV and opinion. These sections were chosen as they were the most likely to contain anglicisms. The proportion of headlines with anglicisms per section can be found in Table TABREF7. Using headlines (instead of full articles) was beneficial for several reasons. First of all, annotating a headline is faster and easier than annotating a full article; this helps ensure that a wider variety of topics will be covered in the corpus. Secondly, anglicisms are abundant in headlines, because they are frequently used as a way of catching the reader's attention BIBREF43. Finally, borrowings that make it to the headline are likely to be particularly salient or relevant, and therefore are good candidates for being extracted and tracked. Corpus description and annotation ::: Corpus description ::: Supplemental Test Set In addition to the usual train/development/test split we have just presented, a supplemental test set of 5,017 headlines was collected. The headlines included in this additional test set also belong to eldiario.es. These headlines were retrieved daily through RSS during February 2020 and included all sections from the newspaper. The headlines in the supplemental corpus therefore do not overlap in time with the main corpus and include more sections. The number of headlines, tokens and anglicisms in the supplemental test set can be found in Table TABREF6. The motivation behind this supplemental test set is to assess the model performance on more naturalistic data, as the headlines in the supplemental corpus (1) postdate those in the main corpus and (2) come from a less borrowing-dense sample. This supplemental test set better mimics the real scenario that an actual anglicism extractor would face and can be used to assess how well the model generalizes to detect anglicisms in any section of the daily news, which is ultimately the aim of this project. Corpus description and annotation ::: Annotation guidelines The term anglicism covers a wide range of linguistic phenomena. Following the typology proposed by gomez1997towards, we focused on direct, unadapted, emerging anglicisms, i.e. lexical borrowings from the English language into Spanish that have recently been imported and that have still not been assimilated into Spanish. Other phenomena such as semantic calques, syntactic anglicisms, acronyms and proper names were considered beyond the scope of this annotation project. Lexical borrowings can be adapted (the spelling of the word is modified to comply with the phonological and orthographic patterns of the recipient language) or unadapted (the word preserves its original spelling). 
For this annotation task, adapted borrowings were ignored and only unadapted borrowings were annotated. Therefore, Spanish adaptations of anglicisms like fútbol (from football), mitin (from meeting) and the like were not annotated as borrowings. Similarly, words derived from foreign lexemes that do not comply with Spanish orthotactics but that have been morphologically derived following the Spanish paradigm (hacktivista, hackear, shakespeariano) were not annotated either. However, pseudo-anglicisms (words that are formed as if they were English, but do not exist in English, such as footing or balconing) were annotated. Words that were not adapted but whose original spelling complies with the graphophonological rules of Spanish (and are therefore unlikely ever to be adapted, such as web, internet, fan, club, videoclip) were annotated or not depending on how recent or emergent they were. After all, a word like club, which has been present in the Spanish language for centuries, cannot be considered emergent anymore and, for this project, would not be as interesting to retrieve as real emerging anglicisms. The notion of emergent is, however, time-dependent and quite subjective: in order to determine which unadapted, graphophonologically acceptable borrowings were to be annotated, the online version of the Diccionario de la lengua española dle was consulted. This dictionary is compiled by the Royal Spanish Academy, a prescriptive institution on the Spanish language. This decision was motivated by the fact that, if a borrowing was already registered by this dictionary (which has a conservative approach to language change) and was considered assimilated (that is, the institution recommended no italics or quotation marks to write that word), then it could be inferred that the word was not emergent anymore. Although the previous guidelines covered most cases, they proved insufficient. Some anglicisms were unadapted (they preserved their original spelling), unacceptable according to the Spanish graphophonological rules, and yet did not satisfy the condition of being emergent. That was the case of words like jazz or whisky, words that do not comply with Spanish graphophonological rules but that were imported decades ago, cannot be considered emergent anymore and are unlikely to ever be adapted into the Spanish spelling system. To adjudicate such cases, the criterion of pragmatic markedness proposed by winter2012proposing (which distinguishes between catachrestic and non-catachrestic borrowing) was applied: if a borrowing was not adapted (i.e. its form remained exactly as it came from English) but referred to a particular invention or innovation that came via the English language, that was not perceived as new anymore and that had never competed with a Spanish equivalent, then it was ignored. This criterion proved to be extremely useful for dealing with old unadapted anglicisms in the fields of music and food. Figure 1 summarizes the decision steps followed during the annotation process. The corpus was annotated by a native speaker of Spanish using Doccano doccano. The annotation tagset includes two labels: ENG, to annotate the English borrowings just described, and OTHER. This OTHER tag was used to tag lexical borrowings from languages other than English. After all, although English is today by far the most prevalent donor of borrowings, there are other languages that also provide new borrowings to Spanish.
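The decision flow just described (and summarized in Figure 1) can be made explicit with a short sketch. The word sets below are tiny, hand-picked stand-ins for judgments that the annotator made manually with the DLE and the pragmatic-markedness criterion; they are illustrative only and do not reflect the paper's actual per-word decisions.

```python
# Rough, illustrative encoding of the annotation decision flow.
ADAPTED            = {"fútbol", "mitin"}          # orthographically adapted loans -> skip
SPANISH_DERIVED    = {"hacktivista", "hackear"}   # Spanish morphology on a foreign lexeme -> skip
NON_ENGLISH        = {"première", "tempeh"}       # borrowings from/via other languages -> OTHER
SPANISH_LOOKING    = {"web", "club"}              # spelling already fits Spanish patterns
IN_DLE_ASSIMILATED = {"club"}                     # registered as assimilated by the DLE
CATACHRESTIC_OLD   = {"jazz", "whisky"}           # old, unadapted, never had a Spanish rival -> skip

def annotate(word: str) -> str:
    """Return 'ENG', 'OTHER' or 'skip' for a candidate borrowing."""
    if word in ADAPTED or word in SPANISH_DERIVED:
        return "skip"
    if word in NON_ENGLISH:
        return "OTHER"
    if word in SPANISH_LOOKING:
        # graphophonologically acceptable: annotate only if still emergent
        return "skip" if word in IN_DLE_ASSIMILATED else "ENG"
    if word in CATACHRESTIC_OLD:
        return "skip"
    return "ENG"                                  # emergent, unadapted anglicism

for w in ["fútbol", "hackear", "première", "club", "jazz", "streaming"]:
    print(w, "->", annotate(w))
```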
Furthermore, the tag OTHER allows us to annotate borrowings such as première or tempeh, borrowings that etymologically do not come from English but that have entered the Spanish language via English influence, even when their spelling is very different from that of English borrowings. In general, we considered that having such a tag could also help assess how successful a classifier is at detecting foreign borrowings in general in Spanish newswire (without having to create a label for every possible donor language, as the number of examples would be too sparse). In total, the training set contained 40 entities labeled as OTHER, the development set contained 14 and the test set contained 13. The supplemental test set contained 35 OTHER entities. Baseline Model A baseline model for automatic extraction of anglicisms was created using the annotated corpus we just presented as training material. As mentioned in Section 3, the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of text will be labeled as anglicisms (in a similar way to an NER task). The chosen model was a conditional random field (CRF) model, which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data BIBREF23, BIBREF24. The model was built using pycrfsuite korobov2014python, the Python wrapper for crfsuite CRFsuite that implements CRF for labeling sequential data. It also used the Token and Span utilities from the spaCy library honnibal2017spacy. The following handcrafted features were used for the model: a bias feature; a token feature; an uppercase feature (y/n); a titlecase feature (y/n); a character trigram feature; a quotation feature (y/n); a word suffix feature (last three characters); the POS tag (provided by spaCy utilities); the word shape (provided by spaCy utilities); and a word embedding (see Table TABREF26). Given that anglicisms can be multiword expressions (such as best seller, big data) and that those units should be treated as one borrowing and not as two independent borrowings, we used multi-token BIO encoding to denote the boundaries of each span BIBREF44. A window of two tokens in each direction was set for the feature extractor. The algorithm used was gradient descent with the L-BFGS method. The model was tuned on the development set using grid search; the hyperparameters considered were c1 (L1 regularization coefficient: $0.01$, $0.05$, $0.1$, $0.5$, $1.0$), c2 (L2 regularization coefficient: $0.01$, $0.05$, $0.1$, $0.5$, $1.0$), embedding scaling ($0.5$, $1.0$, $2.0$, $4.0$), and embedding type bojanowski2017enriching,josecanete20193255001,cardellinoSBWCE,grave2018learning,honnibal2017spacy,perezfasttext,perezglove (see Table TABREF26). The best results were obtained with c1 = $0.05$, c2 = $0.01$, scaling = $0.5$ and word2vec Spanish embeddings by cardellinoSBWCE. The threshold for the stopping criterion delta was selected by observing the loss during preliminary experiments (delta = $1\mathrm {e}-3$). In order to assess the significance of the handcrafted features, a feature ablation study was done on the tuned model, ablating one feature at a time and testing on the development set. Due to the scarcity of spans labeled with the OTHER tag in the development set (only 14) and given that the main purpose of the model is to detect anglicisms, the baseline model was run ignoring the OTHER tag during both tuning and the feature ablation experiments.
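A condensed sketch of how such a CRF can be trained with python-crfsuite is given below. It is not the authors' code: only a subset of the listed features is implemented (the spaCy POS, word-shape and embedding features are omitted for brevity, and the quotation feature is simplified to flagging quote characters), and the toy training headline is invented.

```python
# Minimal CRF training sketch with a handcrafted feature extractor and BIO labels.
import pycrfsuite

def token_features(sent, i):
    tok = sent[i]
    feats = {
        "bias": 1.0,
        "token.lower": tok.lower(),
        "token.isupper": float(tok.isupper()),
        "token.istitle": float(tok.istitle()),
        "token.suffix3": tok[-3:],
        "token.isquote": float(tok in {'"', "'", "“", "”"}),   # simplified quotation feature
    }
    for j in range(len(tok) - 2):                 # character trigrams of the current token
        feats[f"trigram.{tok[j:j+3]}"] = 1.0
    for offset in (-2, -1, 1, 2):                 # window of two tokens in each direction
        if 0 <= i + offset < len(sent):
            feats[f"{offset}:token.lower"] = sent[i + offset].lower()
    return feats

def sent_features(sent):
    return [token_features(sent, i) for i in range(len(sent))]

# Toy training example with multi-token BIO labels (invented headline).
train_sents = [(["El", "fake", "news", "de", "la", "semana"],
                ["O", "B-ENG", "I-ENG", "O", "O", "O"])]

trainer = pycrfsuite.Trainer(verbose=False)       # L-BFGS training by default
for tokens, labels in train_sents:
    trainer.append(sent_features(tokens), labels)
trainer.set_params({"c1": 0.05, "c2": 0.01, "delta": 1e-3})
trainer.train("anglicism.crfsuite")               # writes the model file

tagger = pycrfsuite.Tagger()
tagger.open("anglicism.crfsuite")
print(tagger.tag(sent_features(["Una", "estrategia", "de", "big", "data"])))
```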
Table TABREF27 displays the results on the development set with all features and for the different feature ablation runs. The results show that all of the proposed features contribute to the model's performance, with the character trigram feature having the biggest impact in the feature ablation study. Results The baseline model was then run on the test set and the supplemental test set with the set of features and hyperparameters mentioned in Section SECREF5. Table TABREF28 displays the results obtained. The model was run both with and without the OTHER tag. The metrics for ENG display the results obtained only for the spans labeled as anglicisms; the metrics for OTHER display the results obtained for any borrowing other than anglicisms. The metrics for BORROWING discard the type of label and consider correct any labeled span that has correct boundaries, regardless of the label type (so any type of borrowing, whether ENG or OTHER). In all cases, only full matches were considered correct and no credit was given to partial matching, i.e. if only fake in fake news was retrieved, it was considered wrong and no partial score was given. Results on all sets show an important difference between precision and recall, with precision being significantly higher than recall. There is also a significant difference between the results obtained on the development and test sets (F1 = 89.60, F1 = 87.82) and the results on the supplemental test set (F1 = 71.49). The time difference between the supplemental test set and the development and test sets (the headlines from the supplemental test set come from a different time period than the training set) can probably explain these differences. Comparing the results with and without the OTHER tag, it seems that including it on the development and test sets produces worse results (or they remain roughly the same, at best). However, the best precision result on the supplemental test set was obtained when including the OTHER tag and considering both ENG and OTHER spans as BORROWING (precision = 87.62). This is caused by the fact that, while the development and test sets were compiled from anglicism-rich newspaper sections (similar to the training set), the supplemental test set contained headlines from all the sections in the newspaper, and therefore included borrowings from other languages such as Catalan, Basque or French. When running the model without the OTHER tag on the supplemental test set, these non-English borrowings were labeled as anglicisms by the model (after all, their spelling does not resemble Spanish spelling), damaging the precision score. When the OTHER tag was included, these non-English borrowings got correctly labeled as OTHER, improving the precision score. This shows that, although the OTHER tag might be irrelevant or even damaging when testing on the development or test set, it can be useful when testing on more naturalistic data, such as that in the supplemental test set. Concerning errors, two types of errors were recurrent across all sets: long titles of songs, films or series written in English were a source of false positives, as the model tended to mistake some of the uncapitalized words in the title for anglicisms (for example, it darker in “‘You want it darker’, la oscura y brillante despedida de Leonard Cohen”).
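For reference, the strict span-level scoring described above (exact boundaries and label, no partial credit) can be written in a few lines. The sketch below is an illustration, not the evaluation script used in the paper; spans are assumed to be (start, end, label) tuples and the offsets in the example are arbitrary.

```python
# Span-level precision/recall/F1 with exact matching only.
def span_prf(gold_spans, pred_spans):
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)                           # exact boundary + label matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: retrieving only "fake" out of the gold span "fake news" earns no credit.
gold = [(10, 19, "ENG")]    # full span "fake news" (arbitrary offsets)
pred = [(10, 14, "ENG")]    # only "fake" retrieved -> counted as an error
print(span_prf(gold, pred))  # (0.0, 0.0, 0.0)
```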
On the other hand, anglicisms that appear in the first position of the sentence (and are, therefore, capitalized) were consistently ignored (as the model probably assumed they were named entities) and produced a high number of false negatives (for example, vamping in “Vamping: la recurrente leyenda urbana de la luz azul ‘asesina’”). The results in Table TABREF28 cannot, however, be compared to those reported by previous work. First, the metric that we report is span F-measure, as the evaluation was done at span level (instead of token level) and credit was only given to full matches. Second, there was no Spanish tag assigned to non-borrowings, which means that no credit was given when a Spanish token was correctly identified as such. Future Work This is an on-going project. The corpus we have just presented is a first step towards the development of an extractor of emerging anglicisms in the Spanish press. Future work includes: assessing whether to keep the OTHER tag, improving the baseline model (particularly to improve recall), assessing the suitability and contribution of different sets of features, and exploring different models. In terms of the corpus development, the training set is now closed and stable, but the test set could potentially be increased in order to include more, and more diverse, anglicisms. Conclusions In this paper we have presented a new corpus of 21,570 newspaper headlines written in European Spanish. The corpus is annotated with emergent anglicisms and, to the best of our knowledge, is the first corpus of this type to be released publicly. We have presented the annotation scope, tagset and guidelines, and we have introduced a CRF baseline model for anglicism extraction trained with the described corpus. The results obtained show that the corpus and baseline model are appropriate for automatic anglicism extraction. Acknowledgements The author would like to thank Constantine Lignos for his feedback and advice on this project.
Yes
3c34187a248d179856b766e9534075da1aa5d1cf
3c34187a248d179856b766e9534075da1aa5d1cf_0
Q: What is the performance of the CRF model on the task described? Text: Introduction The study of English influence in the Spanish language has been a hot topic in Hispanic linguistics for decades, particularly concerning lexical borrowing or anglicisms BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Lexical borrowing is a phenomenon that affects all languages and constitutes a productive mechanism for word-formation, especially in the press. chesleypaulapredicting2010 estimated that a reader of French newspapers encountered a new lexical borrowing for every 1,000 words. In Chilean newspapers, lexical borrowings account for approximately 30% of neologisms, 80% of those corresponding to English loanwords BIBREF7. Detecting lexical borrowings is relevant both for lexicographic purposes and for NLP downstream tasks BIBREF8, BIBREF9. However, strategies to track and register lexical borrowings have traditionally relied on manual review of corpora. In this paper we present: (1) a corpus of newspaper headlines in European Spanish annotated with emerging anglicisms and (2) a CRF baseline model for automatic anglicism extraction in Spanish newswire. Related Work Corpus-based studies of English borrowings in Spanish media have traditionally relied on manual evaluation of either previously compiled general corpora such as CREA BIBREF10, BIBREF11, BIBREF12, BIBREF13, or new tailor-made corpora designed to analyze specific genres, varieties or phenomena BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. In terms of automatic detection of anglicisms, previous approaches in different languages have mostly depended on resource lookup (lexicon or corpus frequencies), character n-grams and pattern matching. alex-2008-comparing combined lexicon lookup and a search engine module that used the web as a corpus to detect English inclusions in a corpus of German texts and compared her results with a maxent Markov model. furiassi2007retrieval explored corpora lookup and character n-grams to extract false anglicisms from a corpus of Italian newspapers. andersen2012semi used dictionary lookup, regular expressions and lexicon-derived frequencies of character n-grams to detect anglicism candidates in the Norwegian Newspaper Corpus (NNC) BIBREF21, while losnegaard2012data explored a machine learning approach to anglicism detection in Norwegian by using TiMBL (Tilburg Memory-Based Learner, an implementation of a k-nearest neighbor classifier) with character trigrams as features. garley-hockenmaier-2012-beefmoves trained a maxent classifier with character n-gram and morphological features to identify anglicisms in German online communities. In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism candidates from a corpus of tweets in US Spanish. Work within the code-switching community has also dealt with language identification in multilingual corpora. Due to the nature of code-switching, these models have primarily focused on oral corpora and social media datasets BIBREF22, BIBREF23, BIBREF24.
the results obtained on development and test set (F1 = 89.60, F1 = 87.82) and the results on the supplemental test set (F1 = 71.49)
8bfbf78ea7fae0c0b8a510c9a8a49225bbdb5383
8bfbf78ea7fae0c0b8a510c9a8a49225bbdb5383_0
Q: Does the paper motivate the use of CRF as the baseline model? Text: Introduction The study of English influence in the Spanish language has been a hot topic in Hispanic linguistics for decades, particularly concerning lexical borrowing or anglicisms BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Lexical borrowing is a phenomenon that affects all languages and constitutes a productive mechanism for word-formation, especially in the press. chesleypaulapredicting2010 estimated that a reader of French newspapers encountered a new lexical borrowing for every 1,000 words. In Chilean newspapers, lexical borrowings account for approximately 30% of neologisms, 80% of those corresponding to English loanwords BIBREF7. Detecting lexical borrowings is relevant both for lexicographic purposes and for NLP downstream tasks BIBREF8, BIBREF9. However, strategies to track and register lexical borrowings have traditionally relied on manual review of corpora. In this paper we present: (1) a corpus of newspaper headlines in European Spanish annotated with emerging anglicisms and (2) a CRF baseline model for automatic anglicism extraction in Spanish newswire. Related Work Corpus-based studies of English borrowings in Spanish media have traditionally relied on manual evaluation of either previously compiled general corpora such as CREA BIBREF10, BIBREF11, BIBREF12, BIBREF13, or new tailor-made corpora designed to analyze specific genres, varieties or phenomena BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. In terms of automatic detection of anglicisms, previous approaches in different languages have mostly depended on resource lookup (lexicon or corpus frequencies), character n-grams and pattern matching. alex-2008-comparing combined lexicon lookup and a search engine module that used the web as a corpus to detect English inclusions in a corpus of German texts and compared her results with a maxent Markov model. furiassi2007retrieval explored corpora lookup and character n-grams to extract false anglicisms from a corpus of Italian newspapers. andersen2012semi used dictionary lookup, regular expressions and lexicon-derived frequencies of character n-grams to detect anglicism candidates in the Norwegian Newspaper Corpus (NNC) BIBREF21, while losnegaard2012data explored a machine learning approach to anglicism detection in Norwegian by using TiMBL (Tilburg Memory-Based Learner, an implementation of a k-nearest neighbor classifier) with character trigrams as features. garley-hockenmaier-2012-beefmoves trained a maxent classifier with character n-gram and morphological features to identify anglicisms in German online communities. In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism candidates from a corpus of tweets in US Spanish. Work within the code-switching community has also dealt with language identification in multilingual corpora. Due to the nature of code-switching, these models have primarily focused on oral corpora and social media datasets BIBREF22, BIBREF23, BIBREF24.
the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of text will be labeled as anglicism (in a similar way to an NER task). The chosen model was a conditional random field (CRF) model, which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data
97757a69d9fc28b260e68284fd300726fbe358d0
97757a69d9fc28b260e68284fd300726fbe358d0_0
Q: What are the handcrafted features used? Text: Introduction The study of English influence in the Spanish language has been a hot topic in Hispanic linguistics for decades, particularly concerning lexical borrowing or anglicisms BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Lexical borrowing is a phenomenon that affects all languages and constitutes a productive mechanism for word-formation, especially in the press. chesleypaulapredicting2010 estimated that a reader of French newspapers encountered a new lexical borrowing for every 1,000 words. In Chilean newspapers, lexical borrowings account for approximately 30% of neologisms, 80% of those corresponding to English loanwords BIBREF7. Detecting lexical borrowings is relevant both for lexicographic purposes and for NLP downstream tasks BIBREF8, BIBREF9. However, strategies to track and register lexical borrowings have traditionally relied on manual review of corpora. In this paper we present: (1) a corpus of newspaper headlines in European Spanish annotated with emerging anglicisms and (2) a CRF baseline model for automatic anglicism extraction in Spanish newswire. Related Work Corpus-based studies of English borrowings in Spanish media have traditionally relied on manual evaluation of either previously compiled general corpora such as CREA BIBREF10, BIBREF11, BIBREF12, BIBREF13, or new tailor-made corpora designed to analyze specific genres, varieties or phenomena BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. In terms of automatic detection of anglicisms, previous approaches in different languages have mostly depended on resource lookup (lexicon or corpus frequencies), character n-grams and pattern matching. alex-2008-comparing combined lexicon lookup and a search engine module that used the web as a corpus to detect English inclusions in a corpus of German texts and compared her results with a maxent Markov model. furiassi2007retrieval explored corpora lookup and character n-grams to extract false anglicisms from a corpus of Italian newspapers. andersen2012semi used dictionary lookup, regular expressions and lexicon-derived frequencies of character n-grams to detect anglicism candidates in the Norwegian Newspaper Corpus (NNC) BIBREF21, while losnegaard2012data explored a Machine Learning approach to anglicism detection in Norwegian by using TiMBL (Tilburg Memory-Based Learner, an implementation of a k-nearest neighbor classifier) with character trigrams as features. garley-hockenmaier-2012-beefmoves trained a maxent classifier with character n-gram and morphological features to identify anglicisms in German online communities. In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism candidates from a corpus of tweets in US Spanish. Work within the code-switching community has also dealt with language identification on multilingual corpora. Due to the nature of code-switching, these models have primarily focused on oral corpora and social media datasets BIBREF22, BIBREF23, BIBREF24. 
In the last shared task of language identification in code-switched data BIBREF23, approaches to English-Spanish included CRFs models BIBREF25, BIBREF26, BIBREF27, BIBREF28, logistic regression BIBREF29 and LSTMs models BIBREF30, BIBREF31. The scope and nature of lexical borrowing is, however, somewhat different to that of code-switching. In fact, applying code-switching models to lexical borrowing detection has previously proved to be unsuccessful, as they tend to overestimate the number of anglicisms BIBREF32. In the next section we address the differences between both phenomena and set the scope of this project. Anglicism: Scope of the Phenomenon Linguistic borrowing can be defined as the transference of linguistic elements between two languages. Borrowing and code-switching have frequently been described as a continuum BIBREF33, with a fuzzy frontier between the two. As a result, a precise definition of what borrowing is remains elusive BIBREF34 and some authors prefer to talk about code-mixing in general BIBREF35 or “lone other-language incorporations" BIBREF36. Lexical borrowing in particular involves the incorporation of single lexical units from one language into another language and is usually accompanied by morphological and phonological modification to conform with the patterns of the recipient language BIBREF37, BIBREF38. By definition, code-switches are not integrated into a recipient language, unlike established loanwords BIBREF39. While code-switches are usually fluent multiword interferences that normally comply with grammatical restrictions in both languages and that are produced by bilingual speakers in bilingual discourses, lexical borrowings are words used by monolingual individuals that eventually become lexicalized and assimilated as part of the recipient language lexicon until the knowledge of “foreign" origin disappears BIBREF40. In terms of approaching the problem, automatic code-switching identification has been framed as a sequence modeling problem where every token receives a language ID label (as in a POS-tagging task). Borrowing detection, on the other hand, while it can also be transformed into a sequence labeling problem, is an extraction task, where only certain spans of texts will be labeled (in the fashion of a NER task). Various typologies have been proposed that aim to classify borrowings according to different criteria, both with a cross-linguistic perspective and also specifically aimed to characterize English inclusions in Spanish BIBREF34, BIBREF41, BIBREF42, BIBREF5. In this work, we will be focusing on unassimilated lexical borrowings (sometimes called foreignisms), i.e. words from English origin that are introduced into Spanish without any morphological or orthographic adaptation. Corpus description and annotation ::: Corpus description In this subsection we describe the characteristics of the corpus. We first introduce the main corpus, with the usual train/development/test split that was used to train, tune and evaluate the model. We then present an additional test set that was designed to assess the performance of the model on more naturalistic data. Corpus description and annotation ::: Corpus description ::: Main Corpus The main corpus consists of a collection of monolingual newspaper headlines written in European Spanish. The corpus contains 16,553 headlines, which amounts to 244,114 tokens. Out of those 16,553 headlines, 1,109 contain at least one anglicism. 
The total number of anglicisms is 1,176 (most of them are a single word, although some of them were multiword expressions). The corpus was divided into training, development and test set. The proportions of headlines, tokens and anglicisms in each corpus split can be found in Table TABREF6. The headlines in this corpus come from the Spanish newspaper eldiario.es, a progressive online newspaper based in Spain. eldiario.es is one of the main national newspapers from Spain and, to the best of our knowledge, the only one that publishes its content under a Creative Commons license, which made it ideal for making the corpus publicly available. The headlines were extracted from the newspaper website through web scraping and range from September 2012 to January 2020. Only the following sections were included: economy, technology, lifestyle, music, TV and opinion. These sections were chosen as they were the most likely to contain anglicisms. The proportion of headlines with anglicisms per section can be found in Table TABREF7. Using headlines (instead of full articles) was beneficial for several reasons. First of all, annotating a headline is faster and easier than annotating a full article; this helps ensure that a wider variety of topics will be covered in the corpus. Secondly, anglicisms are abundant in headlines, because they are frequently used as a way of calling the attention of the reader BIBREF43. Finally, borrowings that make it to the headline are likely to be particularly salient or relevant, and therefore are good candidates for being extracted and tracked. Corpus description and annotation ::: Corpus description ::: Supplemental Test Set In addition to the usual train/development/test split we have just presented, a supplemental test set of 5,017 headlines was collected. The headlines included in this additional test set also belong to eldiario.es. These headlines were retrieved daily through RSS during February 2020 and included all sections from the newspaper. The headlines in the supplemental corpus therefore do not overlap in time with the main corpus and include more sections. The number of headlines, tokens and anglicisms in the supplemental test set can be found in Table TABREF6. The motivation behind this supplemental test set is to assess the model performance on more naturalistic data, as the headlines in the supplemental corpus (1) belong to the future of the main corpus and (2) come from a less borrowing-dense sample. This supplemental test set better mimics the real scenario that an actual anglicism extractor would face and can be used to assess how well the model generalizes to detect anglicisms in any section of the daily news, which is ultimately the aim of this project. Corpus description and annotation ::: Annotation guidelines The term anglicism covers a wide range of linguistic phenomena. Following the typology proposed by gomez1997towards, we focused on direct, unadapted, emerging Anglicisms, i.e. lexical borrowings from the English language into Spanish that have recently been imported and that have still not been assimilated into Spanish. Other phenomena such as semantic calques, syntactic anglicisms, acronyms and proper names were considered beyond the scope of this annotation project. Lexical borrowings can be adapted (the spelling of the word is modified to comply with the phonological and orthographic patterns of the recipient language) or unadapted (the word preserves its original spelling). 
For this annotation task, adapted borrowings were ignored and only unadapted borrowings were annotated. Therefore, Spanish adaptations of anglicisms like fútbol (from football), mitin (from meeting) and such were not annotated as borrowings. Similarly, words derived from foreign lexemes that do not comply with Spanish orthotactics but that have been morphologically derived following the Spanish paradigm (hacktivista, hackear, shakespeariano) were not annotated either. However, pseudo-anglicisms (words that are formed as if they were English, but do not exist in English, such as footing or balconing) were annotated. Words that were not adapted but whose original spelling complies with the graphophonological rules of Spanish (and are therefore unlikely to be ever adapted, such as web, internet, fan, club, videoclip) were annotated or not depending on how recent or emergent they were. After all, a word like club, which has been around in the Spanish language for centuries, cannot be considered emergent anymore and, for this project, would not be as interesting to retrieve as real emerging anglicisms. The notion of emergent is, however, time-dependent and quite subjective: in order to determine which unadapted, graphophonologically acceptable borrowings were to be annotated, the online version of the Diccionario de la lengua española dle was consulted. This dictionary is compiled by the Royal Spanish Academy, a prescriptive institution on the Spanish language. This decision was motivated by the fact that, if a borrowing was already registered by this dictionary (which has a conservative approach to language change) and is considered assimilated (that is, the institution recommended no italics or quotation marks to write that word), then it could be inferred that the word was not emergent anymore. Although the previous guidelines covered most cases, they proved insufficient. Some anglicisms were unadapted (they preserved their original spelling), unacceptable according to the Spanish graphophonological rules, and yet did not satisfy the condition of being emergent. That was the case of words like jazz or whisky, words that do not comply with Spanish graphophonological rules but that were imported decades ago, cannot be considered emergent anymore and are unlikely to ever be adapted into the Spanish spelling system. To adjudicate on those cases, the criterion of pragmatic markedness proposed by winter2012proposing (which distinguishes between catachrestic and non-catachrestic borrowing) was applied: if a borrowing was not adapted (i.e. its form remained exactly as it came from English) but referred to a particular invention or innovation that came via the English language, that was not perceived as new anymore and that had never competed with a Spanish equivalent, then it was ignored. This criterion proved to be extremely useful for dealing with old unadapted anglicisms in the fields of music and food. Figure 1 summarizes the decision steps followed during the annotation process. The corpus was annotated by a native speaker of Spanish using Doccano doccano. The annotation tagset includes two labels: ENG, to annotate the English borrowings just described, and OTHER. This OTHER tag was used to tag lexical borrowings from languages other than English. After all, although English is today by far the most prevalent donor of borrowings, there are other languages that also provide new borrowings to Spanish. 
Furthermore, the tag OTHER allows us to annotate borrowings such as première or tempeh, borrowings that etymologically do not come from English but that have entered the Spanish language via English influence, even when their spelling is very different from that of English borrowings. In general, we considered that having such a tag could also help assess how successful a classifier is at detecting foreign borrowings in general in Spanish newswire (without having to create a label for every possible donor language, as the number of examples would be too sparse). In total, the training set contained 40 entities labeled as OTHER, the development set contained 14 and the test set contained 13. The supplemental test set contained 35 OTHER entities. Baseline Model A baseline model for automatic extraction of anglicisms was created using the annotated corpus we just presented as training material. As mentioned in Section 3, the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of text will be labeled as anglicism (in a similar way to an NER task). The chosen model was a conditional random field (CRF) model, which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data BIBREF23, BIBREF24. The model was built using pycrfsuite korobov2014python, the Python wrapper for crfsuite CRFsuite that implements CRF for labeling sequential data. It also used the Token and Span utilities from the spaCy library honnibal2017spacy. The following handcrafted features were used for the model: bias feature; token feature; uppercase feature (y/n); titlecase feature (y/n); character trigram feature; quotation feature (y/n); word suffix feature (last three characters); POS tag (provided by spaCy utilities); word shape (provided by spaCy utilities); and word embedding (see Table TABREF26). Given that anglicisms can be multiword expressions (such as best seller, big data) and that those units should be treated as one borrowing and not as two independent borrowings, we used multi-token BIO encoding to denote the boundaries of each span BIBREF44. A window of two tokens in each direction was set for the feature extractor. The algorithm used was gradient descent with the L-BFGS method. The model was tuned on the development set using grid search; the hyperparameters considered were c1 (L1 regularization coefficient: $0.01$, $0.05$, $0.1$, $0.5$, $1.0$), c2 (L2 regularization coefficient: $0.01$, $0.05$, $0.1$, $0.5$, $1.0$), embedding scaling ($0.5$, $1.0$, $2.0$, $4.0$), and embedding type bojanowski2017enriching,josecanete20193255001,cardellinoSBWCE,grave2018learning,honnibal2017spacy,perezfasttext,perezglove (see Table TABREF26). The best results were obtained with c1 = $0.05$, c2 = $0.01$, scaling = $0.5$ and word2vec Spanish embeddings by cardellinoSBWCE. The threshold for the stopping criterion delta was selected by observing the loss during preliminary experiments (delta = $1\mathrm {e}-3$). In order to assess the significance of the handcrafted features, a feature ablation study was done on the tuned model, ablating one feature at a time and testing on the development set. Due to the scarcity of spans labeled with the OTHER tag on the development set (only 14) and given that the main purpose of the model is to detect anglicisms, the baseline model was run ignoring the OTHER tag both during tuning and during the feature ablation experiments. 
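The feature-extraction code itself is not reproduced in the text, so the following is only a minimal sketch, under stated assumptions, of how the handcrafted features listed above could be assembled for a pycrfsuite tagger with multi-token BIO labels; the toy headline, its labels and the output path are invented for illustration, and the word-embedding and spaCy-derived POS/shape attributes are simplified or omitted for brevity.

```python
# Illustrative sketch (not the paper's code) of dictionary-valued CRF features
# for pycrfsuite, following the feature list described above.
import pycrfsuite


def token_features(sent, i):
    """Features for the i-th token of a tokenized headline `sent`."""
    w = sent[i]
    feats = {
        "bias": 1.0,                             # bias feature
        "token": w.lower(),                      # token feature
        "is_upper": w.isupper(),                 # uppercase feature (y/n)
        "is_title": w.istitle(),                 # titlecase feature (y/n)
        "in_quotes": w in {'"', "'", "«", "»"},  # crude quotation feature
        "suffix3": w[-3:].lower(),               # word suffix (last three characters)
        # simple stand-in for the spaCy word-shape attribute
        "shape": "".join("X" if c.isupper() else "x" if c.islower()
                         else "d" if c.isdigit() else c for c in w),
    }
    # character trigram features
    for j in range(len(w) - 2):
        feats[f"trigram={w[j:j+3].lower()}"] = 1.0
    # window of two tokens in each direction
    for offset in (-2, -1, 1, 2):
        if 0 <= i + offset < len(sent):
            feats[f"{offset}:token"] = sent[i + offset].lower()
    return feats


def sent_features(sent):
    return [token_features(sent, i) for i in range(len(sent))]


# Toy training example with multi-token BIO labels (illustrative only).
train_sents = [["El", "mejor", "big", "data", "para", "empresas"]]
train_labels = [["O", "O", "B-ENG", "I-ENG", "O", "O"]]

trainer = pycrfsuite.Trainer(verbose=False)      # default algorithm is L-BFGS
for xseq, yseq in zip(train_sents, train_labels):
    trainer.append(sent_features(xseq), yseq)
trainer.set_params({"c1": 0.05, "c2": 0.01})     # best values reported above
trainer.train("anglicism_baseline.crfsuite")     # hypothetical output path
```

At prediction time, a pycrfsuite.Tagger opened on the saved model would be applied to the same feature dictionaries to recover BIO-labeled spans.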
Table TABREF27 displays the results on the development set with all features and for the different feature ablation runs. The results show that all features proposed for the baseline model contribute to the results, with the character trigram feature being the one with the biggest impact in the feature ablation study. Results The baseline model was then run on the test set and the supplemental test set with the set of features and hyperparameters mentioned in Section SECREF5. Table TABREF28 displays the results obtained. The model was run both with and without the OTHER tag. The metrics for ENG display the results obtained only for the spans labeled as anglicisms; the metrics for OTHER display the results obtained for any borrowing other than anglicisms. The metrics for BORROWING discard the type of label and consider correct any labeled span that has correct boundaries, regardless of the label type (so any type of borrowing, regardless of whether it is ENG or OTHER). In all cases, only full matches were considered correct and no credit was given to partial matching, i.e. if only fake in fake news was retrieved, it was considered wrong and no partial score was given. Results on all sets show an important difference between precision and recall, precision being significantly higher than recall. There is also a significant difference between the results obtained on the development and test sets (F1 = 89.60, F1 = 87.82) and the results on the supplemental test set (F1 = 71.49). The time difference between the supplemental test set and the development and test set (the headlines from the supplemental test set being from a different time period than the training set) can probably explain these differences. Comparing the results with and without the OTHER tag, it seems that including it on the development and test set produces worse results (or they remain roughly the same, at best). However, the best precision result on the supplemental test set was obtained when including the OTHER tag and considering both ENG and OTHER spans as BORROWING (precision = 87.62). This is caused by the fact that, while the development and test set were compiled from anglicism-rich newspaper sections (similar to the training set), the supplemental test set contained headlines from all the sections in the newspaper, and therefore included borrowings from other languages such as Catalan, Basque or French. When running the model without the OTHER tag on the supplemental test set, these non-English borrowings were labeled as anglicisms by the model (after all, their spelling does not resemble Spanish spelling), damaging the precision score. When the OTHER tag was included, these non-English borrowings were correctly labeled as OTHER, improving the precision score. This shows that, although the OTHER tag might be irrelevant or even damaging when testing on the development or test set, it can be useful when testing on more naturalistic data, such as the data in the supplemental test set. Concerning errors, two types of errors were recurrent among all sets: long titles of songs, films or series written in English were a source of false positives, as the model tended to mistake some of the uncapitalized words in the title for anglicisms (for example, it darker in “`You want it darker', la oscura y brillante despedida de Leonard Cohen"). 
On the other hand, anglicisms that appeared in the first position of the sentence (and were, therefore, capitalized) were consistently ignored (as the model probably assumed they were named entities) and produced a high number of false negatives (for example, vamping in “Vamping: la recurrente leyenda urbana de la luz azul `asesina'"). The results in Table TABREF28 cannot, however, be compared to the ones reported by previous work. First, the metric that we report is span F-measure, as the evaluation was done at span level (instead of token level) and credit was only given to full matches. Second, there was no Spanish tag assigned to non-borrowings, which means that no credit was given if a Spanish token was identified as such. Future Work This is an on-going project. The corpus we have just presented is a first step towards the development of an extractor of emerging anglicisms in the Spanish press. Future work includes: assessing whether to keep the OTHER tag, improving the baseline model (particularly to improve recall), assessing the suitability and contribution of different sets of features and exploring different models. In terms of the corpus development, the training set is now closed and stable, but the test set could potentially be increased in order to have more (and more diverse) anglicisms. Conclusions In this paper we have presented a new corpus of 21,570 newspaper headlines written in European Spanish. The corpus is annotated with emergent anglicisms and, to the best of our knowledge, is the first corpus of this type to be released publicly. We have presented the annotation scope, tagset and guidelines, and we have introduced a CRF baseline model for anglicism extraction trained with the described corpus. The results obtained show that the corpus and baseline model are appropriate for automatic anglicism extraction. Acknowledgements The author would like to thank Constantine Lignos for his feedback and advice on this project. Language Resource References
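The span-level, full-match-only scoring described in the Results section above can be illustrated with a short sketch; representing gold and predicted annotations as (start, end, label) tuples is an assumption made here for clarity and is not the evaluation script actually used.

```python
# Hedged sketch of span-level precision/recall/F1 where only exact boundary
# (and, optionally, label) matches count; partial overlaps get no credit.
def span_prf(gold, pred, ignore_label=False):
    """gold/pred: iterables of (start, end, label) tuples for one corpus."""
    norm = (lambda s: (s[0], s[1])) if ignore_label else (lambda s: s)
    gold_set, pred_set = {norm(s) for s in gold}, {norm(s) for s in pred}
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1


# "fake news" must be matched as a whole span: predicting only "fake" scores 0.
gold = [(10, 19, "ENG")]     # e.g. the span covering "fake news"
pred = [(10, 14, "ENG")]     # only "fake" retrieved -> no credit
print(span_prf(gold, pred))  # (0.0, 0.0, 0.0)
# ignore_label=True mirrors the BORROWING setting (ENG and OTHER collapsed).
```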
Bias feature, Token feature, Uppercase feature (y/n), Titlecase feature (y/n), Character trigram feature, Quotation feature (y/n), Word suffix feature (last three characters), POS tag (provided by spaCy utilities), Word shape (provided by spaCy utilities), Word embedding (see Table TABREF26)
41830ebb8369a24d490e504b7cdeeeaa9b09fd9c
41830ebb8369a24d490e504b7cdeeeaa9b09fd9c_0
Q: What is the state of the art method? Text: Introduction Deep generative models have attracted a lot of attention in recent years BIBREF0. Such methods as variational autoencoders BIBREF1 or generative adversarial networks BIBREF2 are successfully applied to a variety of machine vision problems including image generation BIBREF3, learning interpretable image representations BIBREF4 and style transfer for images BIBREF5. However, natural language generation is more challenging due to many reasons, such as the discrete nature of textual information BIBREF6, the absence of local information continuity and non-smooth disentangled representations BIBREF7. Due to these difficulties, text generation is mostly limited to specific narrow applications and usually works in supervised settings. Content and style are deeply fused in natural language, but style transfer for texts is often addressed in the context of disentangled latent representations BIBREF6, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. An intuitive understanding of this problem is apparent: if an input text has some attribute $A$, a system generates new text similar to the input on a given set of attributes with only one attribute $A$ changed to the target attribute $\tilde{A}$. In the majority of previous works, style transfer is obtained through an encoder-decoder architecture with one or multiple style discriminators to learn disentangled representations. The encoder takes a sentence as an input and generates a style-independent content representation. The decoder then takes the content representation and the target style representation to generate the transformed sentence. In BIBREF13 the authors question the quality and usability of the disentangled representations for texts and suggest an end-to-end approach to style transfer similar to end-to-end machine translation. The contribution of this paper is three-fold: 1) we show that different style transfer architectures have varying results on the test set and that reporting error margins for various training re-runs of the same model is especially important for adequate assessment of the model's accuracy, see Figure FIGREF1; 2) we show that BLEU BIBREF14 between input and output and accuracy of style transfer measured in terms of the accuracy of a pre-trained external style classifier can be manipulated and naturally diverge from the intuitive goal of the style transfer task starting from a certain threshold; 3) new architectures that perform style transfer using improved latent representations are shown to outperform state of the art in terms of BLEU between output and human-written reformulations. Related Work The style of a text is a very general notion that is hard to define in rigorous terms BIBREF15. However, the style of a text can be characterized quantitatively BIBREF16; stylized texts could be generated if a system is trained on a dataset of stylistically similar texts BIBREF17; and author-style could be learned end-to-end BIBREF18, BIBREF19, BIBREF20. A majority of recent works on style transfer focus on the sentiment of text and use it as a target attribute. For example, BIBREF21, BIBREF22, BIBREF23 estimate the quality of the style transfer with a binary sentiment classifier trained on the corpora further used for the training of the style-transfer system. BIBREF24 and especially BIBREF9 generalize this ad-hoc approach, defining a style as a set of arbitrary, quantitatively measurable categorical or continuous parameters. 
Such parameters could include the 'style of the time' BIBREF16, author-specific attributes (see BIBREF25 or BIBREF26 on 'shakespearization'), politeness BIBREF27, formality of speech BIBREF28, and gender or even political slant BIBREF29. A significant challenge associated with narrowly defined style-transfer problems is that finding a good solution for one aspect of a style does not guarantee that you can use the same solution for a different aspect of it. For example, BIBREF30 build a generative model for sentiment transfer with a retrieve-edit approach. In BIBREF21 a delete-retrieve model shows good results for sentiment transfer. However, it is hard to imagine that these retrieval approaches could be used, say, for the style of the time or formality, since in these cases the system is often expected to paraphrase a given sentence to achieve the target style. In BIBREF6 the authors propose a more general approach to controlled text generation, combining a variational autoencoder (VAE) with an extended wake-sleep mechanism in which the sleep procedure updates both the generator and the external classifier that assesses generated samples and feeds learning signals back to the generator. The authors concatenated labels for style with the text representation of the encoder and used this vector with "hard-coded" information about the sentiment of the output as the input of the decoder. This approach seems promising, and some other papers either extend it or use similar ideas. BIBREF8 applied a GAN to align the hidden representations of sentences from two corpora using an adversarial loss to decompose information about the form. In BIBREF31 the model learns a smooth code space and can be used as a discrete GAN with the ability to generate coherent discrete outputs from continuous samples. The authors use two different generators for two different styles. In BIBREF9 an adversarial network is used to make sure that the output of the encoder does not have a style representation. BIBREF6 also uses an adversarial component that ensures there is no stylistic information within the representation. BIBREF9 do not use a dedicated component that controls the semantic component of the latent representation. Such a component is proposed by BIBREF10 who demonstrate that decomposition of style and content could be improved with an auxiliary multi-task for label prediction and an adversarial objective for bag-of-words prediction. BIBREF11 also introduces a dedicated component to control semantic aspects of latent representations and an adversarial-motivational training that includes a special motivational loss to encourage a better decomposition. Speaking about the preservation of semantics, one also has to mention works on paraphrase systems; see, for example, BIBREF32, BIBREF33, BIBREF34. The methodology described in this paper could be extended to paraphrasing systems in terms of semantic preservation measurement; however, this is a matter of future work. BIBREF13 state that learning a latent representation which is independent of the attributes specifying its style is rarely attainable. There are other works on style transfer that are based on the ideas of neural machine translation with BIBREF35 and without parallel corpora BIBREF36 in line with BIBREF37 and BIBREF38. It is important to underline here that the majority of the papers dedicated to style transfer for texts treat the sentiment of a sentence as a stylistic rather than a semantic attribute, despite particular concerns BIBREF39. 
It is also crucial to mention that, in line with BIBREF9, the majority of the state of the art methods for style transfer use an external pre-trained classifier to measure the accuracy of the style transfer. BLEU computes the geometric mean of modified n-gram precisions between a reference and a target sentence across the corpus, combined with a brevity penalty. It is not sensitive to minute changes, but BLEU between input and output is often used as a coarse measure of semantics preservation. For the corpora that have human-written reformulations, BLEU between the output of the model and the human text is used. These metrics are used alongside a handful of others such as PINC (Paraphrase In N-gram Changes) score BIBREF35, POS distance BIBREF12, language fluency BIBREF10, etc. Figure FIGREF2 shows self-reported results of different models in terms of the two most frequently measured performance metrics, namely, BLEU and Accuracy of the style transfer. This paper focuses on the Yelp! reviews dataset that was lately enhanced with human-written reformulations by BIBREF21. These are Yelp! reviews, where each short English review of a place is labeled as a negative or as a positive one. This paper studies three metrics that are most common in the field at the moment and questions to what extent they can be used for performance assessment. These metrics are the accuracy of an external style classifier that is trained to measure the accuracy of the style transfer, BLEU between input and output of a system, and BLEU between output and human-written texts. Style transfer In this work we experiment with extensions of a model, described in BIBREF6, using the Texar BIBREF40 framework. To generate plausible sentences with specific semantic and stylistic features, every sentence is conditioned on a representation vector $z$, which is concatenated with a particular code $c$ that specifies the desired attribute, see Figure FIGREF8. Under the notation introduced in BIBREF6 the base autoencoder (AE) includes a conditional probabilistic encoder $E$ defined with parameters $\theta _E$ to infer the latent representation $z$ given input $x$ The generator $G$, defined with parameters $\theta _G$, is a GRU-RNN generating an output $\hat{x}$ defined as a sequence of tokens $\hat{x} = {\hat{x}_1, ..., \hat{x}_T}$ conditioned on the latent representation $z$ and a stylistic component $c$ that are concatenated and give rise to a generative distribution The encoder and generator form an AE with the following loss This standard reconstruction loss that drives the generator to produce realistic sentences is combined with two additional losses. The first discriminator provides extra learning signals which force the generator to produce coherent attributes that match the structured code in $c$. Since it is impossible to propagate gradients from the discriminator through the discrete sample $\hat{x}$, we use a deterministic continuous approximation, a "soft" generated sentence, denoted as $\tilde{G} = \tilde{G}_\tau (z, c)$ with "temperature" $\tau $ set to $\tau \rightarrow 0$ as training proceeds. The resulting “soft” generated sentence is fed into the discriminator to measure the fitness to the target attribute, leading to the following loss Finally, under the assumption that each structured attribute of generated sentences is controlled through the corresponding code in $c$ and is independent of $z$, one would like to ensure that other, not explicitly modelled attributes do not entangle with $c$. 
This is addressed by a dedicated loss. The training objective for the baseline, shown in Figure FIGREF8, is therefore a sum of the losses from Equations (DISPLAY_FORM4) – (DISPLAY_FORM6), where $\lambda _c$ and $\lambda _z$ are balancing parameters. Let us propose two further extensions of this baseline architecture. To improve the reproducibility of the research, the code of the studied models is open. Both extensions aim to improve the quality of information decomposition within the latent representation. In the first one, shown in Figure FIGREF12, a special dedicated discriminator is added to the model to ensure that the latent representation does not contain stylistic information. The loss of this discriminator is defined as follows: a discriminator denoted as $D_z$ is trying to predict the code $c$ using the representation $z$. Combining the loss defined by Equation (DISPLAY_FORM7) with the adversarial component defined in Equation (DISPLAY_FORM10), the following learning objective is formed, where $\mathcal {L}_{baseline}$ is the sum defined in Equation (DISPLAY_FORM7) and $\lambda _{D_z}$ is a balancing parameter. The second extension of the baseline architecture does not use an adversarial component $D_z$ that is trying to eradicate information on $c$ from the component $z$. Instead, the system shown in Figure FIGREF16 feeds the "soft" generated sentence $\tilde{G}$ into the encoder $E$ and checks how close the representation $E(\tilde{G} )$ is to the original representation $z = E(x)$ in terms of the cosine distance. We further refer to it as the shifted autoencoder, or SAE. Ideally, both $E(\tilde{G} (E(x), c))$ and $E(\tilde{G} (E(x), \bar{c}))$, where $\bar{c}$ denotes an inverse style code, should be equal to $E(x)$. The loss of the shifted autoencoder includes two additional terms, namely, the cosine distances between the softened output processed by the encoder and the encoded original input; $\lambda _{cos}$ and $\lambda _{cos^{-}}$ are two balancing parameters. We also study a combination of both approaches described above, shown in Figure FIGREF17. In Section SECREF4 we describe a series of experiments that we have carried out for these architectures using the Yelp! reviews dataset. Experiments We have found that the baseline, as well as the proposed extensions, has noisy outcomes when retrained from scratch, see Figure FIGREF1. Most of the papers mentioned in Section SECREF2 measure the performance of the methods proposed for the sentiment transfer with two metrics: accuracy of the external sentiment classifier measured on test data, and BLEU between the input and output, which is regarded as a coarse metric for semantic similarity. In the first part of this section, we demonstrate that reporting error margins is essential for the performance assessment in terms that are prevalent in the field at the moment, i.e., BLEU between input and output and accuracy of the external sentiment classifier. In the second part, we also show that both of these metrics, after a certain threshold, start to diverge from the intuitive goal of style transfer and could be manipulated. Experiments ::: Error margins matter In Figure FIGREF1 one can see that the outcomes for every single rerun differ significantly. Namely, accuracy can change up to 5 percentage points, whereas BLEU can vary up to 8 points. This variance can be partially explained by the stochasticity incurred due to sampling from the latent variables. 
However, we show that results for state of the art models sometimes end up within error margins from one another, so one has to report the margins to compare the results rigorously. More importantly, one can see that there is an inherent trade-off between these two performance metrics. This trade-off is not only visible across models but is also present for the same retrained architecture. Therefore, improving one of the two metrics is not enough to confidently state that one system solves the style-transfer problem better than the other. One has to report error margins after several consecutive retrains and, instead of comparing one of the two metrics, has to talk about Pareto-like optimization that would show a confident improvement of both. To put the obtained results into perspective, we have retrained every model from scratch five times in a row. We have also retrained the models of BIBREF12 five times since their code is published online. Figure FIGREF19 shows the results of all models with error margins. It is also enhanced with other self-reported results on the same Yelp! review dataset for which no code was published. One can see that the error margins of the models for which several reruns could be performed overlap significantly. In the next subsection, we carefully study BLEU and accuracy of the external classifier and discuss their aptness to measure style transfer performance. Experiments ::: Delete, duplicate and conquer One can argue that as there is an inevitable entanglement between semantics and stylistics in natural language, there is also an apparent entanglement between BLEU of input and output and accuracy estimation of the style. Indeed, an output that copies the input gives maximal BLEU yet clearly fails in terms of style transfer. On the other hand, a wholly rephrased sentence could provide a low BLEU between input and output but high accuracy. These two issues are not problematic when both BLEU between input and output and accuracy of the transfer are relatively low. However, since style transfer methods have significantly evolved in recent years, some state of the art methods are now sensitive to these issues. The trade-off between these two metrics can be seen in Figure FIGREF1 as well as in Figure FIGREF19. As we have mentioned above, the accuracy of an external classifier and BLEU between output and input are the most widely used methods to assess the performance of style transfer at this moment. However, both of these metrics can be manipulated in a relatively simple manner. One can extend the generative architecture with an internal pre-trained style classifier and then perform the following heuristic procedure: measure the style accuracy on the output for a given batch; choose the sentences that the style classifier labels as incorrect; replace them with duplicates of sentences from the given batch that have the correct style according to the internal classifier and show the highest BLEU with the given inputs. This way one can replace all sentences that push measured accuracy down and boost reported accuracy to 100%. To see the effect that this manipulation has on the key performance metric, we split all sentences with the wrong style into 10 groups of equal size and replace them with the best possible duplicates of the stylistically correct sentences, group after group. The results of this process are shown in Figure FIGREF24. This result is disconcerting. 
Simply replacing part of the output with duplicates of the sentences that happen to have relatively high BLEU with the given inputs makes it possible to "boost" accuracy to 100% and "improve" BLEU. The change of BLEU during such manipulation stays within the error margins of the architecture, but accuracy is significantly manipulated. What is even more disturbing is that BLEU between such manipulated output of the batch and the human-written reformulations provided in BIBREF12 also grows. Figure FIGREF24 shows this for SAE, but all four architectures described in Section SECREF3 demonstrate similar behavior. Our experiments show that though we can manipulate BLEU between output and human-written text, it tends to change monotonically. That might be because of the fact that this metric incorporates information on stylistics and semantics of the text at the same time, preserving the inevitable entanglement that we have mentioned earlier. Despite being costly, human-written reformulations are needed for future experiments with style transfer. It seems that modern architectures have reached a certain level of complexity for which naive proxy metrics such as accuracy of an external classifier or BLEU between output and input are no longer enough for performance estimation and should be combined with BLEU between output and human-written texts. As the quality of style transfer grows further, one has to improve the human-written data sets: for example, one would like to have data sets similar to the ones used for machine translation, with several reformulations of the same sentence. In Figure FIGREF25 one can see how the newly proposed architectures compare with other state of the art approaches in terms of BLEU between output and human-written reformulations. Conclusion Style transfer is not a rigorously defined NLP problem, starting from the definitions of style and semantics and finishing with the metrics that could be used to evaluate the performance of a proposed system. There is a surge of recent contributions that work on this problem. This paper highlights several issues connected with this lack of rigor. First, it shows that the state of the art algorithms are inherently noisy on the two most widely accepted metrics, namely, BLEU between input and output and accuracy of the external style classifier. This noise can be partially attributed to the adversarial components that are often used in the state of the art architectures and partly to certain methodological inconsistencies in the assessment of the performance. Second, it shows that reporting error margins of several consecutive retrains for the same model is crucial for the comparison of different architectures, since error margins for some of the models overlap significantly. Finally, it demonstrates that even BLEU on human-written reformulations can be manipulated in a relatively simple way. Supplemental Material Here are some examples characteristic of different systems. An output of a system follows the input. Here are some successful examples produced by the system with the additional discriminator: it's not much like an actual irish pub, which is depressing. $\rightarrow $ it's definitely much like an actual irish pub, which is grateful. i got a bagel breakfast sandwich and it was delicious! $\rightarrow $ i got a bagel breakfast sandwich and it was disgusting! 
i love their flavored coffee. $\rightarrow $ i dumb their flavored coffee. nice selection of games to play. $\rightarrow $ typical selection of games to play. i'm not a fan of huge chain restaurants. $\rightarrow $ i'm definitely a fan of huge chain restaurants. Here are some examples of typical faulty reformulations: only now i'm really hungry, and really pissed off. $\rightarrow $ kids now i'm really hungry, and really extraordinary off. what a waste of my time and theirs. $\rightarrow $ what a wow. of my time and theirs. cooked to perfection and very flavorful. $\rightarrow $ cooked to pain and very outdated. the beer was nice and cold! $\rightarrow $ the beer was nice and consistant! corn bread was also good! $\rightarrow $ corn bread was also unethical bagged Here are some successful examples produced by the SAE: our waitress was the best, very accommodating. $\rightarrow $ our waitress was the worst, very accommodating. great food and awesome service! $\rightarrow $ horrible food and nasty service! their sandwiches were really tasty. $\rightarrow $ their sandwiches were really bland. i highly recommend the ahi tuna. $\rightarrow $ i highly hated the ahi tuna. other than that, it's great! $\rightarrow $ other than that, it's horrible! Here are some examples of typical faulty reformulations by SAE: good drinks, and good company. $\rightarrow $ 9:30 drinks, and 9:30 company. like it's been in a fridge for a week. $\rightarrow $ like it's been in a fridge for a true. save your money & your patience. $\rightarrow $ save your smile & your patience. no call, no nothing. $\rightarrow $ deliciously call, deliciously community. sounds good doesn't it? $\rightarrow $ sounds good does keeps it talented Here are some successful examples produced by the SAE with additional discriminator: best green corn tamales around. $\rightarrow $ worst green corn tamales around. she did the most amazing job. $\rightarrow $ she did the most desperate job. very friendly staff and manager. $\rightarrow $ very inconsistent staff and manager. even the water tasted horrible. $\rightarrow $ even the water tasted great. go here, you will love it. $\rightarrow $ go here, you will avoid it. Here are some examples of typical faulty reformulations by the SAE with additional discriminator: _num_ - _num_ % capacity at most , i was the only one in the pool. $\rightarrow $ sweetness - stylish % fountains at most, i was the new one in the this is pretty darn good pizza! $\rightarrow $ this is pretty darn unsafe pizza misleading enjoyed the dolly a lot. $\rightarrow $ remove the shortage a lot. so, it went in the trash. $\rightarrow $ so, it improved in the hooked. they are so fresh and yummy. $\rightarrow $ they are so bland and yummy.
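As a complement to the description of the shifted autoencoder (SAE) in the Style transfer section above, its cosine-consistency terms can be sketched roughly as follows; the PyTorch framing and the callables encoder and soft_generate (standing in for $E$ and $\tilde{G}$) are assumptions made for illustration only, not the authors' released code.

```python
import torch
import torch.nn.functional as F


def sae_cosine_terms(x_tokens, c, c_bar, encoder, soft_generate,
                     lambda_cos=1.0, lambda_cos_neg=1.0):
    """Schematic cosine-consistency terms of the shifted autoencoder (SAE).

    encoder(x) -> latent representation z of shape (batch, dim)
    soft_generate(z, c) -> "soft" generated sentence G~(z, c) that can be
    fed back into the encoder (e.g. a sequence of softmax token distributions).
    """
    z = encoder(x_tokens)                      # z = E(x)
    z_same = encoder(soft_generate(z, c))      # E(G~(E(x), c))
    z_flip = encoder(soft_generate(z, c_bar))  # E(G~(E(x), c_bar))
    # cosine distance = 1 - cosine similarity; both re-encoded representations
    # are pushed towards the original z
    loss_same = (1.0 - F.cosine_similarity(z_same, z, dim=-1)).mean()
    loss_flip = (1.0 - F.cosine_similarity(z_flip, z, dim=-1)).mean()
    # these two weighted terms are added to the rest of the training objective
    return lambda_cos * loss_same + lambda_cos_neg * loss_flip
```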
Unanswerable
4904ef32a8f84cf2f53b1532ccf7aa77273b3d19
4904ef32a8f84cf2f53b1532ccf7aa77273b3d19_0
Q: By how much do proposed architectures outperform state-of-the-art? Text: Introduction Deep generative models have attracted a lot of attention in recent years BIBREF0. Such methods as variational autoencoders BIBREF1 or generative adversarial networks BIBREF2 are successfully applied to a variety of machine vision problems including image generation BIBREF3, learning interpretable image representations BIBREF4 and style transfer for images BIBREF5. However, natural language generation is more challenging due to many reasons, such as the discrete nature of textual information BIBREF6, the absence of local information continuity and non-smooth disentangled representations BIBREF7. Due to these difficulties, text generation is mostly limited to specific narrow applications and usually works in supervised settings. Content and style are deeply fused in natural language, but style transfer for texts is often addressed in the context of disentangled latent representations BIBREF6, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. An intuitive understanding of this problem is apparent: if an input text has some attribute $A$, a system generates new text similar to the input on a given set of attributes with only one attribute $A$ changed to the target attribute $\tilde{A}$. In the majority of previous works, style transfer is obtained through an encoder-decoder architecture with one or multiple style discriminators to learn disentangled representations. The encoder takes a sentence as an input and generates a style-independent content representation. The decoder then takes the content representation and the target style representation to generate the transformed sentence. In BIBREF13 the authors question the quality and usability of the disentangled representations for texts and suggest an end-to-end approach to style transfer similar to end-to-end machine translation. The contribution of this paper is three-fold: 1) we show that different style transfer architectures have varying results on the test set and that reporting error margins for various training re-runs of the same model is especially important for adequate assessment of the model's accuracy, see Figure FIGREF1; 2) we show that BLEU BIBREF14 between input and output and accuracy of style transfer measured in terms of the accuracy of a pre-trained external style classifier can be manipulated and naturally diverge from the intuitive goal of the style transfer task starting from a certain threshold; 3) new architectures that perform style transfer using improved latent representations are shown to outperform state of the art in terms of BLEU between output and human-written reformulations. Related Work The style of a text is a very general notion that is hard to define in rigorous terms BIBREF15. However, the style of a text can be characterized quantitatively BIBREF16; stylized texts could be generated if a system is trained on a dataset of stylistically similar texts BIBREF17; and author-style could be learned end-to-end BIBREF18, BIBREF19, BIBREF20. A majority of recent works on style transfer focus on the sentiment of text and use it as a target attribute. For example, BIBREF21, BIBREF22, BIBREF23 estimate the quality of the style transfer with a binary sentiment classifier trained on the corpora further used for the training of the style-transfer system. BIBREF24 and especially BIBREF9 generalize this ad-hoc approach, defining a style as a set of arbitrary, quantitatively measurable categorical or continuous parameters. 
Such parameters could include the 'style of the time' BIBREF16, author-specific attributes (see BIBREF25 or BIBREF26 on 'shakespearization'), politeness BIBREF27, formality of speech BIBREF28, and gender or even political slant BIBREF29. A significant challenge associated with narrowly defined style-transfer problems is that finding a good solution for one aspect of a style does not guarantee that you can use the same solution for a different aspect of it. For example, BIBREF30 build a generative model for sentiment transfer with a retrieve-edit approach. In BIBREF21 a delete-retrieve model shows good results for sentiment transfer. However, it is hard to imagine that these retrieval approaches could be used, say, for the style of the time or formality, since in these cases the system is often expected to paraphrase a given sentence to achieve the target style. In BIBREF6 the authors propose a more general approach to controlled text generation, combining a variational autoencoder (VAE) with an extended wake-sleep mechanism in which the sleep procedure updates both the generator and the external classifier that assesses generated samples and feeds learning signals back to the generator. The authors concatenated labels for style with the text representation of the encoder and used this vector with "hard-coded" information about the sentiment of the output as the input of the decoder. This approach seems promising, and some other papers either extend it or use similar ideas. BIBREF8 applied a GAN to align the hidden representations of sentences from two corpora using an adversarial loss to decompose information about the form. In BIBREF31 the model learns a smooth code space and can be used as a discrete GAN with the ability to generate coherent discrete outputs from continuous samples. The authors use two different generators for two different styles. In BIBREF9 an adversarial network is used to make sure that the output of the encoder does not have a style representation. BIBREF6 also uses an adversarial component that ensures there is no stylistic information within the representation. BIBREF9 do not use a dedicated component that controls the semantic component of the latent representation. Such a component is proposed by BIBREF10 who demonstrate that decomposition of style and content could be improved with an auxiliary multi-task for label prediction and an adversarial objective for bag-of-words prediction. BIBREF11 also introduces a dedicated component to control semantic aspects of latent representations and an adversarial-motivational training that includes a special motivational loss to encourage a better decomposition. Speaking about the preservation of semantics, one also has to mention works on paraphrase systems; see, for example, BIBREF32, BIBREF33, BIBREF34. The methodology described in this paper could be extended to paraphrasing systems in terms of semantic preservation measurement; however, this is a matter of future work. BIBREF13 state that learning a latent representation which is independent of the attributes specifying its style is rarely attainable. There are other works on style transfer that are based on the ideas of neural machine translation with BIBREF35 and without parallel corpora BIBREF36 in line with BIBREF37 and BIBREF38. It is important to underline here that the majority of the papers dedicated to style transfer for texts treat the sentiment of a sentence as a stylistic rather than a semantic attribute, despite particular concerns BIBREF39. 
It is also crucial to mention that, in line with BIBREF9, the majority of the state of the art methods for style transfer use an external pre-trained classifier to measure the accuracy of the style transfer. BLEU computes the geometric mean of modified n-gram precisions between a reference and a target sentence across the corpus, combined with a brevity penalty. It is not sensitive to minute changes, but BLEU between input and output is often used as a coarse measure of semantics preservation. For the corpora that have human-written reformulations, BLEU between the output of the model and the human text is used. These metrics are used alongside a handful of others such as PINC (Paraphrase In N-gram Changes) score BIBREF35, POS distance BIBREF12, language fluency BIBREF10, etc. Figure FIGREF2 shows self-reported results of different models in terms of the two most frequently measured performance metrics, namely, BLEU and Accuracy of the style transfer. This paper focuses on the Yelp! reviews dataset that was lately enhanced with human-written reformulations by BIBREF21. These are Yelp! reviews, where each short English review of a place is labeled as a negative or as a positive one. This paper studies three metrics that are most common in the field at the moment and questions to what extent they can be used for performance assessment. These metrics are the accuracy of an external style classifier that is trained to measure the accuracy of the style transfer, BLEU between input and output of a system, and BLEU between output and human-written texts. Style transfer In this work we experiment with extensions of a model, described in BIBREF6, using the Texar BIBREF40 framework. To generate plausible sentences with specific semantic and stylistic features, every sentence is conditioned on a representation vector $z$, which is concatenated with a particular code $c$ that specifies the desired attribute, see Figure FIGREF8. Under the notation introduced in BIBREF6 the base autoencoder (AE) includes a conditional probabilistic encoder $E$ defined with parameters $\theta _E$ to infer the latent representation $z$ given input $x$ The generator $G$, defined with parameters $\theta _G$, is a GRU-RNN generating an output $\hat{x}$ defined as a sequence of tokens $\hat{x} = {\hat{x}_1, ..., \hat{x}_T}$ conditioned on the latent representation $z$ and a stylistic component $c$ that are concatenated and give rise to a generative distribution The encoder and generator form an AE with the following loss This standard reconstruction loss that drives the generator to produce realistic sentences is combined with two additional losses. The first discriminator provides extra learning signals which force the generator to produce coherent attributes that match the structured code in $c$. Since it is impossible to propagate gradients from the discriminator through the discrete sample $\hat{x}$, we use a deterministic continuous approximation, a "soft" generated sentence, denoted as $\tilde{G} = \tilde{G}_\tau (z, c)$ with "temperature" $\tau $ set to $\tau \rightarrow 0$ as training proceeds. The resulting “soft” generated sentence is fed into the discriminator to measure the fitness to the target attribute, leading to the following loss Finally, under the assumption that each structured attribute of generated sentences is controlled through the corresponding code in $c$ and is independent of $z$, one would like to ensure that other, not explicitly modelled attributes do not entangle with $c$. 
This entanglement concern is addressed by a dedicated loss. The training objective for the baseline, shown in Figure FIGREF8, is therefore a sum of the losses from Equations (DISPLAY_FORM4) – (DISPLAY_FORM6), where $\lambda _c$ and $\lambda _z$ are balancing parameters. Let us propose two further extensions of this baseline architecture. To improve the reproducibility of the research, the code of the studied models is open. Both extensions aim to improve the quality of information decomposition within the latent representation. In the first one, shown in Figure FIGREF12, a special dedicated discriminator is added to the model to ensure that the latent representation does not contain stylistic information. The loss of this discriminator is defined in Equation (DISPLAY_FORM10): a discriminator denoted as $D_z$ tries to predict the code $c$ using the representation $z$. Combining the loss defined by Equation (DISPLAY_FORM7) with the adversarial component defined in Equation (DISPLAY_FORM10), the following learning objective is formed, where $\mathcal {L}_{baseline}$ is the sum defined in Equation (DISPLAY_FORM7) and $\lambda _{D_z}$ is a balancing parameter. The second extension of the baseline architecture does not use an adversarial component $D_z$ that tries to eradicate information on $c$ from the component $z$. Instead, the system shown in Figure FIGREF16 feeds the "soft" generated sentence $\tilde{G}$ into the encoder $E$ and checks how close the representation $E(\tilde{G} )$ is to the original representation $z = E(x)$ in terms of the cosine distance. We further refer to it as a shifted autoencoder, or SAE. Ideally, both $E(\tilde{G} (E(x), c))$ and $E(\tilde{G} (E(x), \bar{c}))$, where $\bar{c}$ denotes an inverse style code, should be equal to $E(x)$. The loss of the shifted autoencoder adds two terms to the objective, namely, the cosine distances between the softened output processed by the encoder and the encoded original input, where $\lambda _{cos}$ and $\lambda _{cos^{-}}$ are two balancing parameters. We also study a combination of both approaches described above, shown in Figure FIGREF17. In Section SECREF4 we describe a series of experiments that we have carried out for these architectures using the Yelp! reviews dataset. Experiments We have found that the baseline, as well as the proposed extensions, has noisy outcomes when retrained from scratch, see Figure FIGREF1. Most of the papers mentioned in Section SECREF2 measure the performance of the methods proposed for sentiment transfer with two metrics: accuracy of the external sentiment classifier measured on test data, and BLEU between the input and output, which is regarded as a coarse metric for semantic similarity. In the first part of this section, we demonstrate that reporting error margins is essential for performance assessment in the terms that are prevalent in the field at the moment, i.e., BLEU between input and output and accuracy of the external sentiment classifier. In the second part, we also show that, after a certain threshold, both of these metrics start to diverge from the intuitive goal of style transfer and can be manipulated. Experiments ::: Error margins matter In Figure FIGREF1 one can see that the outcomes for every single rerun differ significantly. Namely, accuracy can change by up to 5 percentage points, whereas BLEU can vary by up to 8 points. This variance can be partially explained by the stochasticity incurred due to sampling from the latent variables.
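As a minimal illustration of how such margins can be reported, the following Python sketch computes the mean and sample standard deviation of a metric over several retrains; the variable names are illustrative and are not part of any released code.

```python
import numpy as np

def report_margins(metric_values):
    """Mean and sample standard deviation of one metric over several retrains.

    `metric_values` holds one score per retrain-from-scratch, e.g. the external
    classifier accuracy or BLEU between input and output for each rerun.
    """
    values = np.asarray(metric_values, dtype=float)
    return values.mean(), values.std(ddof=1)

# Usage (the per-rerun scores are collected by the experimenter):
#   acc_mean, acc_std = report_margins(accuracy_per_rerun)
#   bleu_mean, bleu_std = report_margins(bleu_per_rerun)
#   print(f"accuracy {acc_mean:.1f} +/- {acc_std:.1f}, BLEU {bleu_mean:.1f} +/- {bleu_std:.1f}")
```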
However, we show that results for state-of-the-art models sometimes end up within error margins of one another, so one has to report the margins to compare the results rigorously. More importantly, one can see that there is an inherent trade-off between these two performance metrics. This trade-off is not only visible across models but is also present for the same retrained architecture. Therefore, improving one of the two metrics is not enough to confidently state that one system solves the style-transfer problem better than the other. One has to report error margins after several consecutive retrains and, instead of comparing one of the two metrics, talk about Pareto-like optimization that would show a confident improvement of both. To put the obtained results into perspective, we have retrained every model from scratch five times in a row. We have also retrained the models of BIBREF12 five times, since their code is published online. Figure FIGREF19 shows the results of all models with error margins. It is also enhanced with other self-reported results on the same Yelp! review dataset for which no code was published. One can see that the error margins of the models for which several reruns could be performed overlap significantly. In the next subsection, we carefully study BLEU and the accuracy of the external classifier and discuss their aptness to measure style transfer performance. Experiments ::: Delete, duplicate and conquer One can argue that, just as there is an inevitable entanglement between semantics and stylistics in natural language, there is also an apparent entanglement between the BLEU of input and output and the accuracy estimation of the style. Indeed, an output that copies the input gives maximal BLEU yet clearly fails in terms of style transfer. On the other hand, a wholly rephrased sentence could provide a low BLEU between input and output but high accuracy. These two issues are not problematic when both BLEU between input and output and the accuracy of the transfer are relatively low. However, since style transfer methods have significantly evolved in recent years, some state-of-the-art methods are now sensitive to these issues. The trade-off between these two metrics can be seen in Figure FIGREF1 as well as in Figure FIGREF19. As we have mentioned above, the accuracy of an external classifier and BLEU between output and input are the most widely used methods to assess the performance of style transfer at this moment. However, both of these metrics can be manipulated in a relatively simple manner. One can extend the generative architecture with an internal pre-trained classifier of style and then perform the following heuristic procedure: measure the style accuracy on the output for a given batch; choose the sentences that the style classifier labels as incorrect; replace them with duplicates of sentences from the given batch that have the correct style according to the internal classifier and show the highest BLEU with the given inputs. In this way one can replace all sentences that push the measured accuracy down and boost the reported accuracy to 100%. To see the effect that this manipulation has on the key performance metrics, we split all sentences with the wrong style into 10 groups of equal size and replace them with the best possible duplicates of the stylistically correct sentences, group after group. The results of this process are shown in Figure FIGREF24. This result is disconcerting.
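A minimal Python sketch of this manipulation heuristic is given below; the style classifier and the sentence-level BLEU function are assumed to be supplied externally and are placeholders rather than part of the studied models.

```python
def boost_reported_metrics(inputs, outputs, style_classifier, target_style, sentence_bleu):
    """Sketch of the manipulation heuristic described above.

    `style_classifier`, `target_style` and `sentence_bleu` are assumed to be
    provided by the experimenter (an internal pre-trained style classifier and a
    sentence-level BLEU between two strings); they are placeholders here.
    """
    # Outputs that already carry the correct style serve as replacement candidates.
    candidates = [out for out in outputs if style_classifier(out) == target_style]
    if not candidates:
        return list(outputs)  # nothing to duplicate from in this batch
    boosted = []
    for inp, out in zip(inputs, outputs):
        if style_classifier(out) == target_style:
            boosted.append(out)
        else:
            # Replace a wrong-style output with a duplicate of the correct-style
            # sentence from the same batch that has the highest BLEU with the input.
            boosted.append(max(candidates, key=lambda cand: sentence_bleu(inp, cand)))
    return boosted
```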
Simply replacing part of the output with duplicates of the sentences that happen to have relatively high BLEU with the given inputs allows one to "boost" accuracy to 100% and to "improve" BLEU. The change of BLEU during such manipulation stays within the error margins of the architecture, but accuracy is significantly manipulated. What is even more disturbing is that BLEU between such manipulated output of the batch and the human-written reformulations provided in BIBREF12 also grows. Figure FIGREF24 shows this for the SAE, but all four architectures described in Section SECREF3 demonstrate similar behavior. Our experiments show that, though we can manipulate BLEU between output and human-written text, it tends to change monotonically. That might be because this metric incorporates information on the stylistics and semantics of the text at the same time, preserving the inevitable entanglement that we have mentioned earlier. Despite being costly, human-written reformulations are needed for future experiments with style transfer. It seems that modern architectures have reached a certain level of complexity for which naive proxy metrics such as the accuracy of an external classifier or BLEU between output and input are no longer enough for performance estimation and should be combined with BLEU between output and human-written texts. As the quality of style transfer grows further, one has to improve the human-written data sets: for example, one would like to have data sets similar to the ones used for machine translation, with several reformulations of the same sentence. In Figure FIGREF25 one can see how the newly proposed architectures compare with other state-of-the-art approaches in terms of BLEU between output and human-written reformulations. Conclusion Style transfer is not a rigorously defined NLP problem, starting from the definitions of style and semantics and finishing with the metrics that could be used to evaluate the performance of a proposed system. There is a surge of recent contributions that work on this problem. This paper highlights several issues connected with this lack of rigor. First, it shows that the state-of-the-art algorithms are inherently noisy on the two most widely accepted metrics, namely, BLEU between input and output and accuracy of the external style classifier. This noise can be partially attributed to the adversarial components that are often used in state-of-the-art architectures and partly to certain methodological inconsistencies in the assessment of the performance. Second, it shows that reporting error margins of several consecutive retrains for the same model is crucial for the comparison of different architectures, since the error margins of some of the models overlap significantly. Finally, it demonstrates that even BLEU on human-written reformulations can be manipulated in a relatively simple way. Supplemental Material Here are some examples characteristic of different systems. The output of a system follows the input. Here are some successful examples produced by the system with additional discriminator: it's not much like an actual irish pub, which is depressing. $\rightarrow $ it's definitely much like an actual irish pub, which is grateful. i got a bagel breakfast sandwich and it was delicious! $\rightarrow $ i got a bagel breakfast sandwich and it was disgusting! i love their flavored coffee. $\rightarrow $ i dumb their flavored coffee. i got a bagel breakfast sandwich and it was delicious! $\rightarrow $ i got a bagel breakfast sandwich and it was disgusting!
i love their flavored coffee. $\rightarrow $ i dumb their flavored coffee. nice selection of games to play. $\rightarrow $ typical selection of games to play. i'm not a fan of huge chain restaurants. $\rightarrow $ i'm definitely a fan of huge chain restaurants. Here are some examples of typical faulty reformulations: only now i'm really hungry, and really pissed off. $\rightarrow $ kids now i'm really hungry, and really extraordinary off. what a waste of my time and theirs. $\rightarrow $ what a wow. of my time and theirs. cooked to perfection and very flavorful. $\rightarrow $ cooked to pain and very outdated. the beer was nice and cold! $\rightarrow $ the beer was nice and consistant! corn bread was also good! $\rightarrow $ corn bread was also unethical bagged Here are some successful examples produced by the SAE: our waitress was the best, very accommodating. $\rightarrow $ our waitress was the worst, very accommodating. great food and awesome service! $\rightarrow $ horrible food and nasty service! their sandwiches were really tasty. $\rightarrow $ their sandwiches were really bland. i highly recommend the ahi tuna. $\rightarrow $ i highly hated the ahi tuna. other than that, it's great! $\rightarrow $ other than that, it's horrible! Here are some examples of typical faulty reformulations by SAE: good drinks, and good company. $\rightarrow $ 9:30 drinks, and 9:30 company. like it's been in a fridge for a week. $\rightarrow $ like it's been in a fridge for a true. save your money & your patience. $\rightarrow $ save your smile & your patience. no call, no nothing. $\rightarrow $ deliciously call, deliciously community. sounds good doesn't it? $\rightarrow $ sounds good does keeps it talented Here are some successful examples produced by the SAE with additional discriminator: best green corn tamales around. $\rightarrow $ worst green corn tamales around. she did the most amazing job. $\rightarrow $ she did the most desperate job. very friendly staff and manager. $\rightarrow $ very inconsistent staff and manager. even the water tasted horrible. $\rightarrow $ even the water tasted great. go here, you will love it. $\rightarrow $ go here, you will avoid it. Here are some examples of typical faulty reformulations by the SAE with additional discriminator: _num_ - _num_ % capacity at most , i was the only one in the pool. $\rightarrow $ sweetness - stylish % fountains at most, i was the new one in the this is pretty darn good pizza! $\rightarrow $ this is pretty darn unsafe pizza misleading enjoyed the dolly a lot. $\rightarrow $ remove the shortage a lot. so, it went in the trash. $\rightarrow $ so, it improved in the hooked. they are so fresh and yummy. $\rightarrow $ they are so bland and yummy.
Unanswerable
45b28a6ce2b0f1a8b703a3529fd1501f465f3fdf
45b28a6ce2b0f1a8b703a3529fd1501f465f3fdf_0
Q: What are three new proposed architectures? Text: Introduction Deep generative models have attracted a lot of attention in recent years BIBREF0. Such methods as variational autoencoders BIBREF1 or generative adversarial networks BIBREF2 are successfully applied to a variety of machine vision problems including image generation BIBREF3, learning interpretable image representations BIBREF4 and style transfer for images BIBREF5. However, natural language generation is more challenging due to many reasons, such as the discrete nature of textual information BIBREF6, the absence of local information continuity and non-smooth disentangled representations BIBREF7. Due to these difficulties, text generation is mostly limited to specific narrow applications and usually works in supervised settings. Content and style are deeply fused in natural language, but style transfer for texts is often addressed in the context of disentangled latent representations BIBREF6, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. An intuitive understanding of this problem is apparent: if an input text has some attribute $A$, a system generates new text similar to the input on a given set of attributes, with only the attribute $A$ changed to the target attribute $\tilde{A}$. In the majority of previous works, style transfer is obtained through an encoder-decoder architecture with one or multiple style discriminators to learn disentangled representations. The encoder takes a sentence as an input and generates a style-independent content representation. The decoder then takes the content representation and the target style representation to generate the transformed sentence. In BIBREF13 the authors question the quality and usability of the disentangled representations for texts and suggest an end-to-end approach to style transfer similar to end-to-end machine translation. The contribution of this paper is three-fold: 1) we show that different style transfer architectures have varying results on test data and that reporting error margins for various training re-runs of the same model is especially important for an adequate assessment of model accuracy, see Figure FIGREF1; 2) we show that BLEU BIBREF14 between input and output and the accuracy of style transfer measured in terms of the accuracy of a pre-trained external style classifier can be manipulated and naturally diverge from the intuitive goal of the style transfer task starting from a certain threshold; 3) new architectures that perform style transfer using improved latent representations are shown to outperform the state of the art in terms of BLEU between output and human-written reformulations. Related Work The style of a text is a very general notion that is hard to define in rigorous terms BIBREF15. However, the style of a text can be characterized quantitatively BIBREF16; stylized texts could be generated if a system is trained on a dataset of stylistically similar texts BIBREF17; and author style could be learned end-to-end BIBREF18, BIBREF19, BIBREF20. A majority of recent works on style transfer focus on the sentiment of text and use it as a target attribute. For example, BIBREF21, BIBREF22, BIBREF23 estimate the quality of the style transfer with a binary sentiment classifier trained on the corpora further used for the training of the style-transfer system. BIBREF24 and especially BIBREF9 generalize this ad-hoc approach, defining a style as a set of arbitrary quantitatively measurable categorical or continuous parameters.
special dedicated discriminator is added to the model to control that the latent representation does not contain stylistic information, shifted autoencoder or SAE, combination of both approaches
d6a27c41c81f12028529e97e255789ec2ba39eaa
d6a27c41c81f12028529e97e255789ec2ba39eaa_0
Q: How much do the standard metrics for style accuracy vary on different re-runs? Text: Introduction Deep generative models have attracted a lot of attention in recent years BIBREF0. Such methods as variational autoencoders BIBREF1 or generative adversarial networks BIBREF2 are successfully applied to a variety of machine vision problems including image generation BIBREF3, learning interpretable image representations BIBREF4 and style transfer for images BIBREF5. However, natural language generation is more challenging due to many reasons, such as the discrete nature of textual information BIBREF6, the absence of local information continuity and non-smooth disentangled representations BIBREF7. Due to these difficulties, text generation is mostly limited to specific narrow applications and usually works in supervised settings. Content and style are deeply fused in natural language, but style transfer for texts is often addressed in the context of disentangled latent representations BIBREF6, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. An intuitive understanding of this problem is apparent: if an input text has some attribute $A$, a system generates new text similar to the input on a given set of attributes, with only the attribute $A$ changed to the target attribute $\tilde{A}$. In the majority of previous works, style transfer is obtained through an encoder-decoder architecture with one or multiple style discriminators to learn disentangled representations. The encoder takes a sentence as an input and generates a style-independent content representation. The decoder then takes the content representation and the target style representation to generate the transformed sentence. In BIBREF13 the authors question the quality and usability of the disentangled representations for texts and suggest an end-to-end approach to style transfer similar to end-to-end machine translation. The contribution of this paper is three-fold: 1) we show that different style transfer architectures have varying results on test data and that reporting error margins for various training re-runs of the same model is especially important for an adequate assessment of model accuracy, see Figure FIGREF1; 2) we show that BLEU BIBREF14 between input and output and the accuracy of style transfer measured in terms of the accuracy of a pre-trained external style classifier can be manipulated and naturally diverge from the intuitive goal of the style transfer task starting from a certain threshold; 3) new architectures that perform style transfer using improved latent representations are shown to outperform the state of the art in terms of BLEU between output and human-written reformulations. Related Work The style of a text is a very general notion that is hard to define in rigorous terms BIBREF15. However, the style of a text can be characterized quantitatively BIBREF16; stylized texts could be generated if a system is trained on a dataset of stylistically similar texts BIBREF17; and author style could be learned end-to-end BIBREF18, BIBREF19, BIBREF20. A majority of recent works on style transfer focus on the sentiment of text and use it as a target attribute. For example, BIBREF21, BIBREF22, BIBREF23 estimate the quality of the style transfer with a binary sentiment classifier trained on the corpora further used for the training of the style-transfer system. BIBREF24 and especially BIBREF9 generalize this ad-hoc approach, defining a style as a set of arbitrary quantitatively measurable categorical or continuous parameters.
Such parameters could include the 'style of the time' BIBREF16, author-specific attributes (see BIBREF25 or BIBREF26 on 'shakespearization'), politeness BIBREF27, formality of speech BIBREF28, and gender or even political slant BIBREF29. A significant challenge associated with narrowly defined style-transfer problems is that finding a good solution for one aspect of a style does not guarantee that you can use the same solution for a different aspect of it. For example, BIBREF30 build a generative model for sentiment transfer with a retrieve-edit approach. In BIBREF21 a delete-retrieve model shows good results for sentiment transfer. However, it is hard to imagine that these retrieval approaches could be used, say, for the style of the time or formality, since in these cases the system is often expected to paraphrase a given sentence to achieve the target style. In BIBREF6 the authors propose a more general approach to the controlled text generation combining variational autoencoder (VAE) with an extended wake-sleep mechanism in which the sleep procedure updates both the generator and external classifier that assesses generated samples and feedbacks learning signals to the generator. Authors had concatenated labels for style with the text representation of the encoder and used this vector with "hard-coded" information about the sentiment of the output as the input of the decoder. This approach seems promising, and some other papers either extend it or use similar ideas. BIBREF8 applied a GAN to align the hidden representations of sentences from two corpora using an adversarial loss to decompose information about the form. In BIBREF31 model learns a smooth code space and can be used as a discrete GAN with the ability to generate coherent discrete outputs from continuous samples. Authors use two different generators for two different styles. In BIBREF9 an adversarial network is used to make sure that the output of the encoder does not have style representation. BIBREF6 also uses an adversarial component that ensures there is no stylistic information within the representation. BIBREF9 do not use a dedicated component that controls the semantic component of the latent representation. Such a component is proposed by BIBREF10 who demonstrate that decomposition of style and content could be improved with an auxiliary multi-task for label prediction and adversarial objective for bag-of-words prediction. BIBREF11 also introduces a dedicated component to control semantic aspects of latent representations and an adversarial-motivational training that includes a special motivational loss to encourage a better decomposition. Speaking about preservation of semantics one also has to mention works on paraphrase systems, see, for example BIBREF32, BIBREF33, BIBREF34. The methodology described in this paper could be extended to paraphrasing systems in terms of semantic preservation measurement, however, this is the matter of future work. BIBREF13 state that learning a latent representation, which is independent of the attributes specifying its style, is rarely attainable. There are other works on style transfer that are based on the ideas of neural machine translation with BIBREF35 and without parallel corpora BIBREF36 in line with BIBREF37 and BIBREF38. It is important to underline here that majority of the papers dedicated to style transfer for texts treat sentiment of a sentence as a stylistic rather than semantic attribute despite particular concerns BIBREF39. 
It is also crucial to mention that in line with BIBREF9 majority of the state of the art methods for style transfer use an external pre-trained classifier to measure the accuracy of the style transfer. BLEU computes the harmonic mean of precision of exact matching n-grams between a reference and a target sentence across the corpus. It is not sensitive to minute changes, but BLEU between input and output is often used as the coarse measure of the semantics preservation. For the corpora that have human written reformulations, BLEU between the output of the model and human text is used. These metrics are used alongside with a handful of others such as PINC (Paraphrase In N-gram Changes) score BIBREF35, POS distance BIBREF12, language fluency BIBREF10, etc. Figure FIGREF2 shows self-reported results of different models in terms of two most frequently measured performance metrics, namely, BLEU and Accuracy of the style transfer. This paper focuses on Yelp! reviews dataset that was lately enhanced with human written reformulations by BIBREF21. These are Yelp! reviews, where each short English review of a place is labeled as a negative or as a positive once. This paper studies three metrics that are most common in the field at the moment and questions to which extent can they be used for the performance assessment. These metrics are the accuracy of an external style classifier that is trained to measure the accuracy of the style transfer, BLEU between input and output of a system, and BLEU between output and human-written texts. Style transfer In this work we experiment with extensions of a model, described in BIBREF6, using Texar BIBREF40 framework. To generate plausible sentences with specific semantic and stylistic features every sentence is conditioned on a representation vector $z$ which is concatenated with a particular code $c$ that specifies desired attribute, see Figure FIGREF8. Under notation introduced in BIBREF6 the base autoencoder (AE) includes a conditional probabilistic encoder $E$ defined with parameters $\theta _E$ to infer the latent representation $z$ given input $x$ Generator $G$ defined with parameters $\theta _G$ is a GRU-RNN for generating and output $\hat{x}$ defined as a sequence of tokens $\hat{x} = {\hat{x}_1, ..., \hat{x}_T}$ conditioned on the latent representation $z$ and a stylistic component $c$ that are concatenated and give rise to a generative distribution These encoder and generator form an AE with the following loss This standard reconstruction loss that drives the generator to produce realistic sentences is combined with two additional losses. The first discriminator provides extra learning signals which enforce the generator to produce coherent attributes that match the structured code in $c$. Since it is impossible to propagate gradients from the discriminator through the discrete sample $\hat{x}$, we use a deterministic continuous approximation a "soft" generated sentence, denoted as $\tilde{G} = \tilde{G}_\tau (z, c)$ with "temperature" $\tau $ set to $\tau \rightarrow 0$ as training proceeds. The resulting “soft” generated sentence is fed into the discriminator to measure the fitness to the target attribute, leading to the following loss Finally, under the assumption that each structured attribute of generated sentences is controlled through the corresponding code in $c$ and is independent from $z$ one would like to control that other not explicitly modelled attributes do not entangle with $c$. 
This is addressed by the dedicated loss The training objective for the baseline, shown in Figure FIGREF8, is therefore a sum of the losses from Equations (DISPLAY_FORM4) – (DISPLAY_FORM6) defined as where $\lambda _c$ and $\lambda _z$ are balancing parameters. Let us propose two further extensions of this baseline architecture. To improve reproducibility of the research the code of the studied models is open. Both extensions aim to improve the quality of information decomposition within the latent representation. In the first one, shown in Figure FIGREF12, a special dedicated discriminator is added to the model to control that the latent representation does not contain stylistic information. The loss of this discriminator is defined as Here a discriminator denoted as $D_z$ is trying to predict code $c$ using representation $z$. Combining the loss defined by Equation (DISPLAY_FORM7) with the adversarial component defined in Equation (DISPLAY_FORM10) the following learning objective is formed where $\mathcal {L}_{baseline}$ is a sum defined in Equation (DISPLAY_FORM7), $\lambda _{D_z}$ is a balancing parameter. The second extension of the baseline architecture does not use an adversarial component $D_z$ that is trying to eradicate information on $c$ from component $z$. Instead, the system, shown in Figure FIGREF16 feeds the "soft" generated sentence $\tilde{G}$ into encoder $E$ and checks how close is the representation $E(\tilde{G} )$ to the original representation $z = E(x)$ in terms of the cosine distance. We further refer to it as shifted autoencoder or SAE. Ideally, both $E(\tilde{G} (E(x), c))$ and $E(\tilde{G} (E(x), \bar{c}))$, where $\bar{c}$ denotes an inverse style code, should be both equal to $E(x)$. The loss of the shifted autoencoder is where $\lambda _{cos}$ and $\lambda _{cos^{-}}$ are two balancing parameters, with two additional terms in the loss, namely, cosine distances between the softened output processed by the encoder and the encoded original input, defined as We also study a combination of both approaches described above, shown on Figure FIGREF17. In Section SECREF4 we describe a series of experiments that we have carried out for these architectures using Yelp! reviews dataset. Experiments We have found that the baseline, as well as the proposed extensions, have noisy outcomes, when retrained from scratch, see Figure FIGREF1. Most of the papers mentioned in Section SECREF2 measure the performance of the methods proposed for the sentiment transfer with two metrics: accuracy of the external sentiment classifier measured on test data, and BLEU between the input and output that is regarded as a coarse metric for semantic similarity. In the first part of this section, we demonstrate that reporting error margins is essential for the performance assessment in terms that are prevalent in the field at the moment, i.e., BLEU between input and output and accuracy of the external sentiment classifier. In the second part, we also show that both of these two metrics after a certain threshold start to diverge from an intuitive goal of the style transfer and could be manipulated. Experiments ::: Error margins matter On Figure FIGREF1 one can see that the outcomes for every single rerun differ significantly. Namely, accuracy can change up to 5 percentage points, whereas BLEU can vary up to 8 points. This variance can be partially explained with the stochasticity incurred due to sampling from the latent variables. 
However, we show that results for state of the art models sometimes end up within error margins from one another, so one has to report the margins to compare the results rigorously. More importantly, one can see that there is an inherent trade-off between these two performance metrics. This trade-off is not only visible across models but is also present for the same retrained architecture. Therefore, improving one of the two metrics is not enough to confidently state that one system solves the style-transfer problem better than the other. One has to report error margins after several consecutive retrains and instead of comparing one of the two metrics has to talk about Pareto-like optimization that would show confident improvement of both. To put obtained results into perspective, we have retrained every model from scratch five times in a row. We have also retrained the models of BIBREF12 five times since their code is published online. Figure FIGREF19 shows the results of all models with error margins. It is also enhanced with other self-reported results on the same Yelp! review dataset for which no code was published. One can see that error margins of the models, for which several reruns could be performed, overlap significantly. In the next subsection, we carefully study BLEU and accuracy of the external classifier and discuss their aptness to measure style transfer performance. Experiments ::: Delete, duplicate and conquer One can argue that as there is an inevitable entanglement between semantics and stylistics in natural language, there is also an apparent entanglement between BLEU of input and output and accuracy estimation of the style. Indeed, the output that copies input gives maximal BLEU yet clearly fails in terms of the style transfer. On the other hand, a wholly rephrased sentence could provide a low BLEU between input and output but high accuracy. These two issues are not problematic when both BLEU between input and output and accuracy of the transfer are relatively low. However, since style transfer methods have significantly evolved in recent years, some state of the art methods are now sensitive to these issues. The trade-off between these two metrics can be seen in Figure FIGREF1 as well as in Figure FIGREF19. As we have mentioned above, the accuracy of an external classifier and BLEU between output and input are the most widely used methods to assess the performance of style transfer at this moment. However, both of these metrics can be manipulated in a relatively simple manner. One can extend the generative architecture with internal pre-trained classifier of style and then perform the following heuristic procedure: measure the style accuracy on the output for a given batch; choose the sentences that style classifier labels as incorrect; replace them with duplicates of sentences from the given batch that have correct style according to the internal classifier and show the highest BLEU with given inputs. This way One can replace all sentences that push measured accuracy down and boost reported accuracy to 100%. To see the effect that this manipulation has on the key performance metric we split all sentences with wrong style in 10 groups of equal size and replaces them with the best possible duplicates of the stylistically correct sentences group after group. The results of this process are shown in Figure FIGREF24. This result is disconcerting. 
Simply replacing part of the output with duplicates of the sentences that happen to have relatively high BLEU with given inputs allows to "boost" accuracy to 100% and "improve" BLEU. The change of BLEU during such manipulation stays within error margins of the architecture, but accuracy is significantly manipulated. What is even more disturbing is that BLEU between such manipulated output of the batch and human-written reformulations provided in BIBREF12 also grows. Figure FIGREF24 shows that for SAE but all four architectures described in Section SECREF3 demonstrate similar behavior. Our experiments show that though we can manipulate BLEU between output and human-written text, it tends to change monotonically. That might be because of the fact that this metric incorporates information on stylistics and semantics of the text at the same time, preserving inevitable entanglement that we have mentioned earlier. Despite being costly, human-written reformulations are needed for future experiments with style transfer. It seems that modern architectures have reached a certain level of complexity for which naive proxy metrics such as accuracy of an external classifier or BLEU between output and input are already not enough for performance estimation and should be combined with BLEU between output and human-written texts. As the quality of style transfer grows further one has to improve the human-written data sets: for example, one would like to have data sets similar to the ones used for machine translation with several reformulations of the same sentence. On Figure FIGREF25 one can see how new proposed architectures compare with another state of the art approaches in terms of BLEU between output and human-written reformulations. Conclusion Style transfer is not a rigorously defined NLP problem. Starting from definitions of style and semantics and finishing with metrics that could be used to evaluate the performance of a proposed system. There is a surge of recent contributions that work on this problem. This paper highlights several issues connected with this lack of rigor. First, it shows that the state of the art algorithms are inherently noisy on the two most widely accepted metrics, namely, BLEU between input and output and accuracy of the external style classifier. This noise can be partially attributed to the adversarial components that are often used in the state of the art architectures and partly due to certain methodological inconsistencies in the assessment of the performance. Second, it shows that reporting error margins of several consecutive retrains for the same model is crucial for the comparison of different architectures, since error margins for some of the models overlap significantly. Finally, it demonstrates that even BLEU on human-written reformulations can be manipulated in a relatively simple way. Supplemental Material Here are some examples characteristic for different systems. An output of a system follows the input. Here are some successful examples produced by the system with additional discriminator: it's not much like an actual irish pub, which is depressing. $\rightarrow $ it's definitely much like an actual irish pub, which is grateful. i got a bagel breakfast sandwich and it was delicious! $\rightarrow $ i got a bagel breakfast sandwich and it was disgusting! i love their flavored coffee. $\rightarrow $ i dumb their flavored coffee. i got a bagel breakfast sandwich and it was delicious! $\rightarrow $ i got a bagel breakfast sandwich and it was disgusting! 
nice selection of games to play. $\rightarrow $ typical selection of games to play. i'm not a fan of huge chain restaurants. $\rightarrow $ i'm definitely a fan of huge chain restaurants. Here are some examples of typical faulty reformulations: only now i'm really hungry, and really pissed off. $\rightarrow $ kids now i'm really hungry, and really extraordinary off. what a waste of my time and theirs. $\rightarrow $ what a wow. of my time and theirs. cooked to perfection and very flavorful. $\rightarrow $ cooked to pain and very outdated. the beer was nice and cold! $\rightarrow $ the beer was nice and consistant! corn bread was also good! $\rightarrow $ corn bread was also unethical bagged Here are some successful examples produced by the SAE: our waitress was the best, very accommodating. $\rightarrow $ our waitress was the worst, very accommodating. great food and awesome service! $\rightarrow $ horrible food and nasty service! their sandwiches were really tasty. $\rightarrow $ their sandwiches were really bland. i highly recommend the ahi tuna. $\rightarrow $ i highly hated the ahi tuna. other than that, it's great! $\rightarrow $ other than that, it's horrible! Here are some examples of typical faulty reformulations by SAE: good drinks, and good company. $\rightarrow $ 9:30 drinks, and 9:30 company. like it's been in a fridge for a week. $\rightarrow $ like it's been in a fridge for a true. save your money & your patience. $\rightarrow $ save your smile & your patience. no call, no nothing. $\rightarrow $ deliciously call, deliciously community. sounds good doesn't it? $\rightarrow $ sounds good does keeps it talented Here are some successful examples produced by the SAE with additional discriminator: best green corn tamales around. $\rightarrow $ worst green corn tamales around. she did the most amazing job. $\rightarrow $ she did the most desperate job. very friendly staff and manager. $\rightarrow $ very inconsistent staff and manager. even the water tasted horrible. $\rightarrow $ even the water tasted great. go here, you will love it. $\rightarrow $ go here, you will avoid it. Here are some examples of typical faulty reformulations by the SAE with additional discriminator: _num_ - _num_ % capacity at most , i was the only one in the pool. $\rightarrow $ sweetness - stylish % fountains at most, i was the new one in the this is pretty darn good pizza! $\rightarrow $ this is pretty darn unsafe pizza misleading enjoyed the dolly a lot. $\rightarrow $ remove the shortage a lot. so, it went in the trash. $\rightarrow $ so, it improved in the hooked. they are so fresh and yummy. $\rightarrow $ they are so bland and yummy.
accuracy can change up to 5 percentage points, whereas BLEU can vary up to 8 points
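Given noise of this magnitude, a comparison between two systems has to look at both metrics and their spread at once. Here is a minimal sketch of such a comparison, assuming each system is summarized by a list of (accuracy, BLEU) pairs collected from several consecutive retrains.

```python
import statistics

def margins(runs):
    """runs: list of (accuracy, self_bleu) pairs from consecutive retrains."""
    acc, bleu = zip(*runs)
    return (statistics.mean(acc), statistics.stdev(acc),
            statistics.mean(bleu), statistics.stdev(bleu))

def pareto_improves(runs_a, runs_b):
    """True only if system A beats system B on *both* metrics by more than
    one standard deviation, a rough proxy for the Pareto-like comparison."""
    a_acc, a_acc_sd, a_bleu, a_bleu_sd = margins(runs_a)
    b_acc, b_acc_sd, b_bleu, b_bleu_sd = margins(runs_b)
    return (a_acc - a_acc_sd > b_acc + b_acc_sd and
            a_bleu - a_bleu_sd > b_bleu + b_bleu_sd)
```

Only when one system clears the other's margins on both metrics does the comparison suggest a genuine improvement rather than a different point on the same trade-off curve.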
2d3bf170c1647c5a95abae50ee3ef3b404230ce4
2d3bf170c1647c5a95abae50ee3ef3b404230ce4_0
Q: Which baseline methods are used? Text: Introduction Sequence-to-sequence models BIBREF0 , BIBREF1 have achieved state of the art results across a wide variety of tasks, including Neural Machine Translation (NMT) BIBREF2 , BIBREF3 , text summarization BIBREF4 , BIBREF5 , speech recognition BIBREF6 , BIBREF7 , image captioning BIBREF8 , and conversational modeling BIBREF9 , BIBREF10 . The most popular approaches are based on an encoder-decoder architecture consisting of two recurrent neural networks (RNNs) and an attention mechanism that aligns target to source tokens BIBREF2 , BIBREF11 . The typical attention mechanism used in these architectures computes a new attention context at each decoding step based on the current state of the decoder. Intuitively, this corresponds to looking at the source sequence after the output of every single target token. Inspired by how humans process sentences, we believe it may be unnecessary to look back at the entire original source sequence at each step. We thus propose an alternative attention mechanism (section "Memory-Based Attention Model" ) that leads to smaller computational time complexity. Our method predicts $K$ attention context vectors while reading the source, and learns to use a weighted average of these vectors at each step of decoding. Thus, we avoid looking back at the source sequence once it has been encoded. We show (section "Experiments" ) that this speeds up inference while performing on-par with the standard mechanism on both toy and real-world WMT translation datasets. We also show that our mechanism leads to larger speedups as sequences get longer. Finally, by visualizing the attention scores (section "Visualizing Attention" ), we verify that the proposed technique learns meaningful alignments, and that different attention context vectors specialize on different parts of the source. Sequence-to-Sequence Model with Attention Our models are based on an encoder-decoder architecture with attention mechanism BIBREF2 , BIBREF11 . An encoder function takes as input a sequence of source tokens $\mathbf {x} = (x_1, ..., x_m)$ and produces a sequence of states $\mathbf {s} = (s_1, ..., s_m)$ .The decoder is an RNN that predicts the probability of a target sequence $\mathbf {y} = (y_1, ..., y_T \mid \mathbf {s})$ . The probability of each target token $y_i \in \lbrace 1, ... ,|V|\rbrace $ is predicted based on the recurrent state in the decoder RNN, $h_i$ , the previous words, $y_{<i}$ , and a context vector $c_i$ . The context vector $c_i$ , also referred to as the attention vector, is calculated as a weighted average of the source states. $$c_i & = \sum _{j}{\alpha _{ij} s_j} \\ {\alpha }_{i} & = \text{softmax}(f_{att}(h_i, \mathbf {s}))$$ (Eq. 3) Here, $f_{att}(h_i, \mathbf {s})$ is an attention function that calculates an unnormalized alignment score between the encoder state $s_j$ and the decoder state $h_i$ . Variants of $f_{att}$ used in BIBREF2 and BIBREF11 are: $ f_{att}(h_i, s_j)= {\left\lbrace \begin{array}{ll} v_a^T \text{tanh}(W_a[h_i, s_j]),& \emph {Bahdanau} \\ h_i^TW_as_j & \emph {Luong} \end{array}\right.} $ where $W_a$ and $v_a$ are model parameters learned to predict alignment. Let $|S|$ and $|T|$ denote the lengths of the source and target sequences respectively and $D$ denoate the state size of the encoder and decoder RNN. 
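To illustrate the scoring variants above, here is a small NumPy sketch of a single attention step. Parameter names follow the equations, the expected shape of `W_a` differs between the two variants, and this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(h_i, S, W_a, v_a=None, variant="luong"):
    """h_i: decoder state (D,); S: encoder states (|S|, D).
    Returns the context vector c_i as a weighted average of S."""
    if variant == "bahdanau":
        # v_a^T tanh(W_a [h_i; s_j]); W_a has shape (d_att, 2D), v_a has shape (d_att,)
        scores = np.array([v_a @ np.tanh(W_a @ np.concatenate([h_i, s_j]))
                           for s_j in S])
    else:
        # h_i^T W_a s_j; W_a has shape (D, D)
        scores = np.array([h_i @ W_a @ s_j for s_j in S])
    alpha = softmax(scores)   # alignment weights over source positions
    return alpha @ S          # context vector of shape (D,)
```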
Such content-based attention mechanisms result in inference times of $O(D^2|S||T|)$ , as each context vector depends on the current decoder state $h_i$ and all encoder states, and requires an $O(D^2)$ matrix multiplication. The decoder outputs a distribution over a vocabulary of fixed-size $|V|$ : $$P(y_i \vert y_{<i}, \mathbf {x}) = \text{softmax}(W[s_i; c_i] + b)$$ (Eq. 5) The model is trained end-to-end by minimizing the negative log likelihood of the target words using stochastic gradient descent. Memory-Based Attention Model Our proposed model is shown in Figure 1 . During encoding, we compute an attention matrix $C \in \mathbb {R}^{K \times D}$ , where $K$ is the number of attention vectors and a hyperparameter of our method, and $D$ is the dimensionality of the top-most encoder state. This matrix is computed by predicting a score vector $\alpha _t \in \mathbb {R}^K$ at each encoding time step $t$ . $C$ is then a linear combination of the encoder states, weighted by $\alpha _t$ : $$C_k & = \sum _{t=0}^{|S|}{\alpha _{tk} s_t} \\ \alpha _t & = \text{softmax}(W_\alpha s_t) ,$$ (Eq. 7) where $W_{\alpha }$ is a parameter matrix in $\mathbb {R}^{K\times D}$ . The computational time complexity for this operation is $O(KD|S|)$ . One can think of C as compact fixed-length memory that the decoder will perform attention over. In contrast, standard approaches use a variable-length set of encoder states for attention. At each decoding step, we similarly predict $K$ scores $\beta \in \mathbb {R}^K$ . The final attention context $c$ is a linear combination of the rows in $C$ weighted by the scores. Intuitively, each decoder step predicts how important each of the $K$ attention vectors is. $$c & = \sum _{i=0}^{K}{\beta _i C_i} \\ \beta & = \text{softmax}(W_\beta h)$$ (Eq. 8) Here, $h$ is the current state of the decoder, and $W_\beta $ is a learned parameter matrix. Note that we do not access the encoder states at each decoder step. We simply take a linear combination of the attention matrix $C$ pre-computed during encoding - a much cheaper operation that is independent of the length of the source sequence. The time complexity of this computation is $O(KD|T|)$ as multiplication with the $K$ attention matrices needs to happen at each decoding step. Summing $O(KD|S|)$ from encoding and $O(KD|T|)$ from decoding, we have a total linear computational complexity of $O(KD(|S| + |T|)$ . As $D$ is typically very large, 512 or 1024 units in most applications, we expect our model to be faster than the standard attention mechanism running in $O(D^2|S||T|)$ . For long sequences (as in summarization, where |S| is large), we also expect our model to be faster than the cheaper dot-based attention mechanism, which needs $O(D|S||T|)$ computation time and requires encoder and decoder states sizes to match. We also experimented with using a sigmoid function instead of the softmax to score the encoder and decoder attention scores, resulting in 4 possible combinations. We call this choice the scoring function. A softmax scoring function calculates normalized scores, while the sigmoid scoring function results in unnormalized scores that can be understood as gates. Model Interpretations Our memory-based attention model can be understood intuitively in two ways. We can interpret it as "predicting" the set of attention contexts produced by a standard attention mechanism during encoding. To see this, assume we set $K \approx |T|$ . 
In this case, we predict all $|T|$ attention contexts during the encoding stage and learn to choose the right one during decoding. This is cheaper than computing contexts one-by-one based on the decoder and encoder content. In fact, we could enforce this objective by first training a regular attention model and adding a regularization term to force the memory matrix $C$ to be close to the $T\times D$ vectors computed by the standard attention. We leave it to future work to explore such an objective. Alternatively, we can interpret our mechanism as first predicting a compact $K \times D$ memory matrix, a representation of the source sequence, and then performing location-based attention on the memory by picking which row of the matrix to attend to. Standard location-based attention mechanism, by contrast, predicts a location in the source sequence to focus on BIBREF11 , BIBREF8 . Position Encodings (PE) In the above formulation, the predictions of attention contexts are symmetric. That is, $C_i$ is not forced to be different from $C_{j\ne i}$ . While we would hope for the model to learn to generate distinct attention contexts, we now present an extension that pushes the model into this direction. We add position encodings to the score matrix that forces the first few context vector $C_1, C_2, ...$ to focus on the beginning of the sequence and the last few vectors $...,C_{K-1}, C_K$ to focus on the end (thereby encouraging in-between vectors to focus on the middle). Explicitly, we multiply the score vector $\alpha $ with position encodings $l_s\in \mathbb {R}^{K}$ : $$C^{PE} & = \sum _{s=0}^{|S|}{\alpha ^{PE} h_s} \\ \alpha ^{PE}_s & = \text{softmax}(W_\alpha h_s \circ l_s)$$ (Eq. 11) To obtain $l_s$ we first calculate a constant matrix $L$ where we define each element as $$L_{ks} & = (1-k/K)(1-s/\mathcal {S})+\frac{k}{K}\frac{s}{\mathcal {S}},$$ (Eq. 12) adapting a formula from BIBREF13 . Here, $k\in \lbrace 1,2,...,K\rbrace $ is the context vector index and $\mathcal {S}$ is the maximum sequence length across all source sequences. The manifold is shown graphically in Figure 2 . We can see that earlier encoder states are upweighted in the first context vectors, and later states are upweighted in later vectors. The symmetry of the manifold and its stationary point having value 0.5 both follow from Eq. 12 . The elements of the matrix that fall beyond the sequence lengths are then masked out and the remaining elements are renormalized across the timestep dimension. This results in the jagged array of position encodings $\lbrace l_{ks}\rbrace $ . Toy Copying Experiment Due to the reduction of computational time complexity we expect our method to yield performance gains especially for longer sequences and tasks where the source can be compactly represented in a fixed-size memory matrix. To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length like in BIBREF14 . We generated 4 training datasets of 100,000 examples and a validation dataset of 1,000 examples. The vocabulary size was 20. For each dataset, the sequences had lengths randomly chosen between 0 to $L$ , for $L\in \lbrace 10, 50, 100, 200\rbrace $ unique to each dataset. All models are implemented using TensorFlow based on the seq2seq implementation of BIBREF15 and trained on a single machine with a Nvidia K40m GPU. 
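The following NumPy sketch puts the pieces together: the memory matrix computed once during encoding, the decoder-side context that never reads the encoder states again, and the position-encoding weights of Eq. 12. The exact indexing of $k$ and $s$ and the point at which the weights enter the softmax are assumptions made for illustration, not the released implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_weights(K, max_len, seq_len):
    """Eq. 12: L_{ks} = (1 - k/K)(1 - s/S) + (k/K)(s/S), masked to the
    actual sequence length and renormalized across time steps."""
    k = np.arange(1, K + 1)[:, None] / K               # (K, 1)
    s = np.arange(1, max_len + 1)[None, :] / max_len   # (1, max_len)
    L = (1 - k) * (1 - s) + k * s                      # (K, max_len)
    L = L[:, :seq_len]                                 # mask positions beyond the sequence
    return L / L.sum(axis=1, keepdims=True)            # jagged weights l_{ks}

def encode_memory(S, W_alpha, pos=None):
    """S: encoder states (|S|, D); W_alpha: (K, D).
    Returns the fixed-size memory C (K, D) in O(K * D * |S|)."""
    scores = S @ W_alpha.T                             # (|S|, K), one score vector per step
    if pos is not None:
        scores = scores * pos.T                        # optional elementwise PE, as in Eq. 11
    alpha = softmax(scores, axis=-1)                   # alpha_t over the K slots
    return alpha.T @ S                                 # C_k = sum_t alpha_{tk} s_t

def decode_context(h, C, W_beta):
    """h: decoder state (D,); C: memory (K, D); W_beta: (K, D).
    The context is a mixture of memory rows; no encoder state is read."""
    beta = softmax(W_beta @ h)                         # (K,)
    return beta @ C                                    # (D,)
```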
We use a 2-layer, 256-unit bidirectional LSTM BIBREF16 encoder, a 2-layer 256-unit LSTM decoder, and 256-dimensional embeddings. For the attention baseline, we use the standard parametrized attention BIBREF2 . Dropout of 0.2 (0.8 keep probability) is applied to the input of each cell, and we optimize using Adam BIBREF17 at a learning rate of 0.0001 and batch size 128. We train for at most 200,000 steps (see Figure 3 for sample learning curves). BLEU scores are calculated on tokenized data using the multi-bleu.perl script in Moses. We decode using beam search with a beam size of 10 BIBREF18 . Table 1 shows the BLEU scores of our model on different sequence lengths while varying $K$ . This is a study of the trade-off between computational time and representational power. A large $K$ allows us to compute complex source representations, while a $K$ of 1 limits the source representation to a single vector. We can see that performance consistently increases with $K$ up to a point that depends on the data length, with longer sequences requiring more complex representations. The results with and without position encodings are almost identical on the toy data. Our technique learns to fit the data as well as the standard attention mechanism despite having less representational power. Both beat the non-attention baseline by a significant margin. That we are able to represent the source sequence with a fixed-size matrix with fewer than $|S|$ rows suggests that traditional attention mechanisms may be representing the source with redundancies and wasting computational resources. This makes intuitive sense for the toy task, which should require a relatively simple representation. The last column shows that our technique significantly speeds up the inference process. The gap in inference speed increases as sequences become longer. We measured inference time on the full validation set of 1,000 examples, not including data loading or model construction times. Figure 3 shows the learning curves for sequence length 200. We see that $K=1$ is unable to fit the data distribution, while $K\in \lbrace 32, 64\rbrace $ fits the data almost as quickly as the attention-based model. Figure 3 shows the effect of varying the encoder and decoder scoring functions between softmax and sigmoid. All combinations manage to fit the data, but some converge faster than others. In section "Visualizing Attention" we show that distinct alignments are learned by different function combinations. Machine Translation Next, we explore whether the memory-based attention mechanism is able to fit complex real-world datasets. For this purpose we use 4 large machine translation datasets of WMT'17 on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finnish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples). We used the newly available pre-processed datasets for the WMT'17 task. Note that our scores may not be directly comparable to other work that performs their own data pre-processing. We learn shared vocabularies of 16,000 subword units using the BPE algorithm BIBREF19 . We use newstest2015 as a validation set, and report BLEU on newstest2016. We use a similar setup to the Toy Copy task, but use 512 RNN and embedding units, train using 8 distributed workers with 1 GPU each, and train for at most 1M steps. We save checkpoints every 30 minutes during training, and choose the best based on the validation BLEU score.
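The toy data itself is straightforward to reproduce. A minimal sketch of the copy-task generation follows, with lengths simplified to the range 1..L and a vocabulary of 20 symbols.

```python
import random

def make_copy_dataset(n_examples, max_len, vocab_size=20, seed=0):
    """Generates (source, target) pairs for the sequence-copy task:
    the target is simply a copy of the source."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_examples):
        length = rng.randint(1, max_len)
        seq = [rng.randrange(vocab_size) for _ in range(length)]
        data.append((seq, list(seq)))   # target == source
    return data

train = make_copy_dataset(100_000, max_len=200)
valid = make_copy_dataset(1_000, max_len=200, seed=1)
```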
Table 2 compares our approach with and without position encodings, and with varying values for hyperparameter $K$ , to baseline models with regular attention mechanism. Learning curves are shown in Figure 4 . We see that our memory attention model with sufficiently high $K$ performs on-par with, or slightly better, than the attention-based baseline model despite its simpler nature. Across the board, models with $K=64$ performed better than corresponding models with $K=32$ , suggesting that using a larger number of attention vectors can capture a richer understanding of source sequences. Position encodings also seem to consistently improve model performance. Table 3 shows that our model results in faster decoding time even on a complex dataset with a large vocabulary of 16k. We measured decoding time over the full validation set, not including time used for model setup and data loading, averaged across 10 runs. The average sequence length for examples in this data was 35, and we expect more significant speedups for tasks with longer sequences, as suggested by our experiments on toy data. Note that in our NMT examples/experiments, $K\approx T$ , but we obtain computational savings from the fact that $K \ll D$ . We may be able to set $K \ll T$ , as in toy copying, and still get very good performance in other tasks. For instance, in summarization the source is complex but the representation of the source required to perform the task is "simple" (i.e. all that is needed to generate the abstract). Figure 5 shows the effect of using sigmoid and softmax function in the encoders and decoders. We found that softmax/softmax consistently performs badly, while all other combinations perform about equally well. We report results for the best combination only (as chosen on the validation set), but we found this choice to only make a minor difference. Visualizing Attention A useful property of the standard attention mechanism is that it produces meaningful alignment between source and target sequences. Often, the attention mechanism learns to progressively focus on the next source token as it decodes the target. These visualizations can be an important tool in debugging and evaluating seq2seq models and are often used for unknown token replacement. This raises the question of whether or not our proposed memory attention mechanism also learns to generate meaningful alignments. Due to limiting the number of attention contexts to a number that is generally less than the sequence length, it is not immediately obvious what each context would learn to focus on. Our hope was that the model would learn to focus on multiple alignments at the same time, within the same attention vector. For example, if the source sequence is of length 40 and we have $K=10$ attention contexts, we would hope that $C_1$ roughly focuses on tokens 1 to 4, $C_2$ on tokens 5 to 8, and so on. Figures 6 and 7 show that this is indeed the case. To generate this visualization we multiply the attention scores $\alpha $ and $\beta $ from the encoder and decoder. Figure 8 shows a sample translation task visualization. Figure 6 suggests that our model learns distinct ways to use its memory depending on the encoder and decoder functions. Interestingly, using softmax normalization results in attention maps typical of those derived from using standard attention, i.e. a relatively linear mapping between source and target tokens. 
Meanwhile, using sigmoid gating results in what seems to be a distributed representation of the source sequences across encoder time steps, with multiple contiguous attention contexts being accessed at each decoding step. Related Work Our contributions build on previous work in making seq2seq models more computationally efficient. BIBREF11 introduce various attention mechanisms that are computationally simpler and perform as well or better than the original one presented in BIBREF2 . However, these typically still require $O(D^2)$ computation complexity, or lack the flexibility to look at the full source sequence. Efficient location-based attention BIBREF8 has also been explored in the image recognition domain. BIBREF3 presents several enhancements to the standard seq2seq architecture that allow more efficient computation on GPUs, such as only attending on the bottom layer. BIBREF20 propose a linear time architecture based on stacked convolutional neural networks. BIBREF21 also propose the use of convolutional encoders to speed up NMT. BIBREF22 propose a linear attention mechanism based on covariance matrices applied to information retrieval. BIBREF23 enable online linear time attention calculation by enforcing that the alignment between input and output sequence elements be monotonic. Previously, monotonic attention was proposed for morphological inflection generation by BIBREF24 . Conclusion In this work, we propose a novel memory-based attention mechanism that results in a linear computational time of $O(KD(|S| + |T|))$ during decoding in seq2seq models. Through a series of experiments, we demonstrate that our technique leads to consistent inference speedups as sequences get longer, and can fit complex data distributions such as those found in Neural Machine Translation. We show that our attention mechanism learns meaningful alignments despite being constrained to a fixed representation after encoding. We encourage future work that explores the optimal values of $K$ for various language tasks and examines whether or not it is possible to predict $K$ based on the task at hand. We also encourage evaluating our models on other tasks that must deal with long sequences but have compact representations, such as summarization and question-answering, and further exploration of their effect on memory and training speed.
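Assuming the per-step encoder scores $\alpha $ and decoder scores $\beta $ have been collected into matrices, one natural way to realize the multiplication described above is the following sketch; the plotting and normalization details of the actual figures are not specified here.

```python
import numpy as np

def alignment_map(alpha, beta):
    """alpha: encoder scores, shape (|S|, K); beta: decoder scores, shape (|T|, K).
    Entry (i, t) sums, over the K memory slots, how strongly target step i
    reads slot k times how strongly slot k was built from source step t."""
    return beta @ alpha.T   # (|T|, |S|) source-target alignment map
```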
standard parametrized attention and a non-attention baseline
6e8c587b6562fafb43a7823637b84cd01487059a
6e8c587b6562fafb43a7823637b84cd01487059a_0
Q: How much is the BLEU score?
Ranges from 44.22 to 100.00 depending on K and the sequence length.
ab9453fa2b927c97b60b06aeda944ac5c1bfef1e
ab9453fa2b927c97b60b06aeda944ac5c1bfef1e_0
Q: Which datasets are used in experiments? Text: Introduction Sequence-to-sequence models BIBREF0 , BIBREF1 have achieved state of the art results across a wide variety of tasks, including Neural Machine Translation (NMT) BIBREF2 , BIBREF3 , text summarization BIBREF4 , BIBREF5 , speech recognition BIBREF6 , BIBREF7 , image captioning BIBREF8 , and conversational modeling BIBREF9 , BIBREF10 . The most popular approaches are based on an encoder-decoder architecture consisting of two recurrent neural networks (RNNs) and an attention mechanism that aligns target to source tokens BIBREF2 , BIBREF11 . The typical attention mechanism used in these architectures computes a new attention context at each decoding step based on the current state of the decoder. Intuitively, this corresponds to looking at the source sequence after the output of every single target token. Inspired by how humans process sentences, we believe it may be unnecessary to look back at the entire original source sequence at each step. We thus propose an alternative attention mechanism (section "Memory-Based Attention Model" ) that leads to smaller computational time complexity. Our method predicts $K$ attention context vectors while reading the source, and learns to use a weighted average of these vectors at each step of decoding. Thus, we avoid looking back at the source sequence once it has been encoded. We show (section "Experiments" ) that this speeds up inference while performing on-par with the standard mechanism on both toy and real-world WMT translation datasets. We also show that our mechanism leads to larger speedups as sequences get longer. Finally, by visualizing the attention scores (section "Visualizing Attention" ), we verify that the proposed technique learns meaningful alignments, and that different attention context vectors specialize on different parts of the source. Sequence-to-Sequence Model with Attention Our models are based on an encoder-decoder architecture with attention mechanism BIBREF2 , BIBREF11 . An encoder function takes as input a sequence of source tokens $\mathbf {x} = (x_1, ..., x_m)$ and produces a sequence of states $\mathbf {s} = (s_1, ..., s_m)$ .The decoder is an RNN that predicts the probability of a target sequence $\mathbf {y} = (y_1, ..., y_T \mid \mathbf {s})$ . The probability of each target token $y_i \in \lbrace 1, ... ,|V|\rbrace $ is predicted based on the recurrent state in the decoder RNN, $h_i$ , the previous words, $y_{<i}$ , and a context vector $c_i$ . The context vector $c_i$ , also referred to as the attention vector, is calculated as a weighted average of the source states. $$c_i & = \sum _{j}{\alpha _{ij} s_j} \\ {\alpha }_{i} & = \text{softmax}(f_{att}(h_i, \mathbf {s}))$$ (Eq. 3) Here, $f_{att}(h_i, \mathbf {s})$ is an attention function that calculates an unnormalized alignment score between the encoder state $s_j$ and the decoder state $h_i$ . Variants of $f_{att}$ used in BIBREF2 and BIBREF11 are: $ f_{att}(h_i, s_j)= {\left\lbrace \begin{array}{ll} v_a^T \text{tanh}(W_a[h_i, s_j]),& \emph {Bahdanau} \\ h_i^TW_as_j & \emph {Luong} \end{array}\right.} $ where $W_a$ and $v_a$ are model parameters learned to predict alignment. Let $|S|$ and $|T|$ denote the lengths of the source and target sequences respectively and $D$ denoate the state size of the encoder and decoder RNN. 
Such content-based attention mechanisms result in inference times of $O(D^2|S||T|)$ , as each context vector depends on the current decoder state $h_i$ and all encoder states, and requires an $O(D^2)$ matrix multiplication. The decoder outputs a distribution over a vocabulary of fixed-size $|V|$ : $$P(y_i \vert y_{<i}, \mathbf {x}) = \text{softmax}(W[s_i; c_i] + b)$$ (Eq. 5) The model is trained end-to-end by minimizing the negative log likelihood of the target words using stochastic gradient descent. Memory-Based Attention Model Our proposed model is shown in Figure 1 . During encoding, we compute an attention matrix $C \in \mathbb {R}^{K \times D}$ , where $K$ is the number of attention vectors and a hyperparameter of our method, and $D$ is the dimensionality of the top-most encoder state. This matrix is computed by predicting a score vector $\alpha _t \in \mathbb {R}^K$ at each encoding time step $t$ . $C$ is then a linear combination of the encoder states, weighted by $\alpha _t$ : $$C_k & = \sum _{t=0}^{|S|}{\alpha _{tk} s_t} \\ \alpha _t & = \text{softmax}(W_\alpha s_t) ,$$ (Eq. 7) where $W_{\alpha }$ is a parameter matrix in $\mathbb {R}^{K\times D}$ . The computational time complexity for this operation is $O(KD|S|)$ . One can think of C as compact fixed-length memory that the decoder will perform attention over. In contrast, standard approaches use a variable-length set of encoder states for attention. At each decoding step, we similarly predict $K$ scores $\beta \in \mathbb {R}^K$ . The final attention context $c$ is a linear combination of the rows in $C$ weighted by the scores. Intuitively, each decoder step predicts how important each of the $K$ attention vectors is. $$c & = \sum _{i=0}^{K}{\beta _i C_i} \\ \beta & = \text{softmax}(W_\beta h)$$ (Eq. 8) Here, $h$ is the current state of the decoder, and $W_\beta $ is a learned parameter matrix. Note that we do not access the encoder states at each decoder step. We simply take a linear combination of the attention matrix $C$ pre-computed during encoding - a much cheaper operation that is independent of the length of the source sequence. The time complexity of this computation is $O(KD|T|)$ as multiplication with the $K$ attention matrices needs to happen at each decoding step. Summing $O(KD|S|)$ from encoding and $O(KD|T|)$ from decoding, we have a total linear computational complexity of $O(KD(|S| + |T|)$ . As $D$ is typically very large, 512 or 1024 units in most applications, we expect our model to be faster than the standard attention mechanism running in $O(D^2|S||T|)$ . For long sequences (as in summarization, where |S| is large), we also expect our model to be faster than the cheaper dot-based attention mechanism, which needs $O(D|S||T|)$ computation time and requires encoder and decoder states sizes to match. We also experimented with using a sigmoid function instead of the softmax to score the encoder and decoder attention scores, resulting in 4 possible combinations. We call this choice the scoring function. A softmax scoring function calculates normalized scores, while the sigmoid scoring function results in unnormalized scores that can be understood as gates. Model Interpretations Our memory-based attention model can be understood intuitively in two ways. We can interpret it as "predicting" the set of attention contexts produced by a standard attention mechanism during encoding. To see this, assume we set $K \approx |T|$ . 
In this case, we predict all $|T|$ attention contexts during the encoding stage and learn to choose the right one during decoding. This is cheaper than computing contexts one-by-one based on the decoder and encoder content. In fact, we could enforce this objective by first training a regular attention model and adding a regularization term to force the memory matrix $C$ to be close to the $T\times D$ vectors computed by the standard attention. We leave it to future work to explore such an objective. Alternatively, we can interpret our mechanism as first predicting a compact $K \times D$ memory matrix, a representation of the source sequence, and then performing location-based attention on the memory by picking which row of the matrix to attend to. Standard location-based attention mechanism, by contrast, predicts a location in the source sequence to focus on BIBREF11 , BIBREF8 . Position Encodings (PE) In the above formulation, the predictions of attention contexts are symmetric. That is, $C_i$ is not forced to be different from $C_{j\ne i}$ . While we would hope for the model to learn to generate distinct attention contexts, we now present an extension that pushes the model into this direction. We add position encodings to the score matrix that forces the first few context vector $C_1, C_2, ...$ to focus on the beginning of the sequence and the last few vectors $...,C_{K-1}, C_K$ to focus on the end (thereby encouraging in-between vectors to focus on the middle). Explicitly, we multiply the score vector $\alpha $ with position encodings $l_s\in \mathbb {R}^{K}$ : $$C^{PE} & = \sum _{s=0}^{|S|}{\alpha ^{PE} h_s} \\ \alpha ^{PE}_s & = \text{softmax}(W_\alpha h_s \circ l_s)$$ (Eq. 11) To obtain $l_s$ we first calculate a constant matrix $L$ where we define each element as $$L_{ks} & = (1-k/K)(1-s/\mathcal {S})+\frac{k}{K}\frac{s}{\mathcal {S}},$$ (Eq. 12) adapting a formula from BIBREF13 . Here, $k\in \lbrace 1,2,...,K\rbrace $ is the context vector index and $\mathcal {S}$ is the maximum sequence length across all source sequences. The manifold is shown graphically in Figure 2 . We can see that earlier encoder states are upweighted in the first context vectors, and later states are upweighted in later vectors. The symmetry of the manifold and its stationary point having value 0.5 both follow from Eq. 12 . The elements of the matrix that fall beyond the sequence lengths are then masked out and the remaining elements are renormalized across the timestep dimension. This results in the jagged array of position encodings $\lbrace l_{ks}\rbrace $ . Toy Copying Experiment Due to the reduction of computational time complexity we expect our method to yield performance gains especially for longer sequences and tasks where the source can be compactly represented in a fixed-size memory matrix. To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length like in BIBREF14 . We generated 4 training datasets of 100,000 examples and a validation dataset of 1,000 examples. The vocabulary size was 20. For each dataset, the sequences had lengths randomly chosen between 0 to $L$ , for $L\in \lbrace 10, 50, 100, 200\rbrace $ unique to each dataset. All models are implemented using TensorFlow based on the seq2seq implementation of BIBREF15 and trained on a single machine with a Nvidia K40m GPU. 
We use a 2-layer 256-unit, a bidirectional LSTM BIBREF16 encoder, a 2-layer 256-unit LSTM decoder, and 256-dimensional embeddings. For the attention baseline, we use the standard parametrized attention BIBREF2 . Dropout of 0.2 (0.8 keep probability) is applied to the input of each cell and we optimize using Adam BIBREF17 at a learning rate of 0.0001 and batch size 128. We train for at most 200,000 steps (see Figure 3 for sample learning curves). BLEU scores are calculated on tokenized data using the multi-bleu.perl script in Moses. We decode using beam search with a beam size of 10 BIBREF18 . Table 1 shows the BLEU scores of our model on different sequence lengths while varying $K$ . This is a study of the trade-off between computational time and representational power. A large $K$ allows us to compute complex source representations, while a $K$ of 1 limits the source representation to a single vector. We can see that performance consistently increases with $K$ up to a point that depends on the data length, with longer sequences requiring more complex representations. The results with and without position encodings are almost identical on the toy data. Our technique learns to fit the data as well as the standard attention mechanism despite having less representational power. Both beat the non-attention baseline by a significant margin. That we are able to represent the source sequence with a fixed size matrix with fewer than $|S|$ rows suggests that traditional attention mechanisms may be representing the source with redundancies and wasting computational resources. This makes intuitive sense for the toy task, which should require a relatively simple representation. The last column shows that our technique significantly speeds up the inference process. The gap in inference speed increases as sequences become longer. We measured inference time on the full validation set of 1,000 examples, not including data loading or model construction times. Figure 3 shows the learning curves for sequence length 200. We see that $K=1$ is unable to fit the data distribution, while $K\in \lbrace 32, 64\rbrace $ fits the data almost as quickly as the attention-based model. Figure 3 shows the effect of varying the encoder and decoder scoring functions between softmax and sigmoid. All combinations manage to fit the data, but some converge faster than others. In section "Visualizing Attention" we show that distinct alignments are learned by different function combinations. Machine Translation Next, we explore if the memory-based attention mechanism is able to fit complex real-world datasets. For this purpose we use 4 large machine translation datasets of WMT'17 on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples). We used the newly available pre-processed datasets for the WMT'17 task. Note that our scores may not be directly comparable to other work that performs their own data pre-processing. We learn shared vocabularies of 16,000 subword units using the BPE algorithm BIBREF19 . We use newstest2015 as a validation set, and report BLEU on newstest2016. We use a similar setup to the Toy Copy task, but use 512 RNN and embedding units, train using 8 distributed workers with 1 GPU each, and train for at most 1M steps. We save checkpoints every 30 minutes during training, and choose the best based on the validation BLEU score. 
Table 2 compares our approach with and without position encodings, and with varying values for hyperparameter $K$ , to baseline models with regular attention mechanism. Learning curves are shown in Figure 4 . We see that our memory attention model with sufficiently high $K$ performs on-par with, or slightly better, than the attention-based baseline model despite its simpler nature. Across the board, models with $K=64$ performed better than corresponding models with $K=32$ , suggesting that using a larger number of attention vectors can capture a richer understanding of source sequences. Position encodings also seem to consistently improve model performance. Table 3 shows that our model results in faster decoding time even on a complex dataset with a large vocabulary of 16k. We measured decoding time over the full validation set, not including time used for model setup and data loading, averaged across 10 runs. The average sequence length for examples in this data was 35, and we expect more significant speedups for tasks with longer sequences, as suggested by our experiments on toy data. Note that in our NMT examples/experiments, $K\approx T$ , but we obtain computational savings from the fact that $K \ll D$ . We may be able to set $K \ll T$ , as in toy copying, and still get very good performance in other tasks. For instance, in summarization the source is complex but the representation of the source required to perform the task is "simple" (i.e. all that is needed to generate the abstract). Figure 5 shows the effect of using sigmoid and softmax function in the encoders and decoders. We found that softmax/softmax consistently performs badly, while all other combinations perform about equally well. We report results for the best combination only (as chosen on the validation set), but we found this choice to only make a minor difference. Visualizing Attention A useful property of the standard attention mechanism is that it produces meaningful alignment between source and target sequences. Often, the attention mechanism learns to progressively focus on the next source token as it decodes the target. These visualizations can be an important tool in debugging and evaluating seq2seq models and are often used for unknown token replacement. This raises the question of whether or not our proposed memory attention mechanism also learns to generate meaningful alignments. Due to limiting the number of attention contexts to a number that is generally less than the sequence length, it is not immediately obvious what each context would learn to focus on. Our hope was that the model would learn to focus on multiple alignments at the same time, within the same attention vector. For example, if the source sequence is of length 40 and we have $K=10$ attention contexts, we would hope that $C_1$ roughly focuses on tokens 1 to 4, $C_2$ on tokens 5 to 8, and so on. Figures 6 and 7 show that this is indeed the case. To generate this visualization we multiply the attention scores $\alpha $ and $\beta $ from the encoder and decoder. Figure 8 shows a sample translation task visualization. Figure 6 suggests that our model learns distinct ways to use its memory depending on the encoder and decoder functions. Interestingly, using softmax normalization results in attention maps typical of those derived from using standard attention, i.e. a relatively linear mapping between source and target tokens. 
Meanwhile, using sigmoid gating results in what seems to be a distributed representation of the source sequences across encoder time steps, with multiple contiguous attention contexts being accessed at each decoding step. Related Work Our contributions build on previous work in making seq2seq models more computationally efficient. BIBREF11 introduce various attention mechanisms that are computationally simpler and perform as well as or better than the original one presented in BIBREF2. However, these typically still require $O(D^2)$ computational complexity, or lack the flexibility to look at the full source sequence. Efficient location-based attention BIBREF8 has also been explored in the image recognition domain. BIBREF3 presents several enhancements to the standard seq2seq architecture that allow more efficient computation on GPUs, such as only attending on the bottom layer. BIBREF20 propose a linear-time architecture based on stacked convolutional neural networks. BIBREF21 also propose the use of convolutional encoders to speed up NMT. BIBREF22 propose a linear attention mechanism based on covariance matrices applied to information retrieval. BIBREF23 enable online linear-time attention calculation by enforcing that the alignment between input and output sequence elements be monotonic. Previously, monotonic attention was proposed for morphological inflection generation by BIBREF24. Conclusion In this work, we propose a novel memory-based attention mechanism that results in a linear computational time of $O(KD(|S| + |T|))$ during decoding in seq2seq models. Through a series of experiments, we demonstrate that our technique leads to consistent inference speedups as sequences get longer, and can fit complex data distributions such as those found in Neural Machine Translation. We show that our attention mechanism learns meaningful alignments despite being constrained to a fixed representation after encoding. We encourage future work that explores the optimal values of $K$ for various language tasks and examines whether or not it is possible to predict $K$ based on the task at hand. We also encourage evaluating our models on other tasks that must deal with long sequences but have compact representations, such as summarization and question answering, and further exploration of their effect on memory and training speed.
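As a concrete reading of the visualization procedure described in the Visualizing Attention section above, the following sketch combines the encoder scores $\alpha $ (one row of source weights per context) with the decoder scores $\beta $ (one row of context weights per target step) into a full source-target alignment map by multiplying them. Shapes and names are assumptions for illustration, not the authors' code.

```python
import numpy as np
import matplotlib.pyplot as plt

def alignment_map(alpha, beta):
    """Combine encoder scores alpha (K x S) with decoder scores beta (T x K)
    into a T x S source-target alignment map."""
    return beta @ alpha

# Example: 40 source tokens, K = 10 attention contexts, 38 target tokens.
K, S, T = 10, 40, 38
rng = np.random.default_rng(0)
alpha = rng.random((K, S)); alpha /= alpha.sum(axis=1, keepdims=True)
beta = rng.random((T, K));  beta  /= beta.sum(axis=1, keepdims=True)

A = alignment_map(alpha, beta)           # (38, 40)
plt.imshow(A, aspect="auto", origin="lower")
plt.xlabel("source position"); plt.ylabel("target position")
plt.savefig("alignment_map.png")
```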
Sequence Copy Task and WMT'17
3a8d65eb8e1dbb995981a0e02d86ebf3feab107a
3a8d65eb8e1dbb995981a0e02d86ebf3feab107a_0
Q: What regularizers were used to encourage consistency in back translation cycles? Text: Introduction Unsupervised bilingual lexicon induction (UBLI) has been shown to benefit NLP tasks for low resource languages, including unsupervised NMT BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, information retrieval BIBREF5, BIBREF6, dependency parsing BIBREF7, and named entity recognition BIBREF8, BIBREF9. Recent research has attempted to induce unsupervised bilingual lexicons by aligning monolingual word vector spaces BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15. Given a pair of languages, their word alignment is inherently a bi-directional problem (e.g. English-Italian vs Italian-English). However, most existing research considers mapping from one language to another without making use of symmetry. Our experiments show that separately learned UBLI models are not always consistent in opposite directions. As shown in Figure 1a, when the model of BIBREF11 Conneau18a is applied to English and Italian, the primal model maps the word “three” to the Italian word “tre”, but the dual model maps “tre” to “two” instead of “three”. We propose to address this issue by exploiting duality, encouraging forward and backward mappings to form a closed loop (Figure 1b). In particular, we extend the model of BIBREF11 Conneau18a by using a cycle consistency loss BIBREF16 to regularize two models in opposite directions. Experiments on two benchmark datasets show that the simple method of enforcing consistency gives better results in both directions. Our model significantly outperforms competitive baselines, obtaining the best published results. We release our code at xxx. Related Work UBLI. A typical line of work uses adversarial training BIBREF17, BIBREF10, BIBREF18, BIBREF11, matching the distributions of source and target word embeddings through generative adversarial networks BIBREF19. Non-adversarial approaches have also been explored. For instance, BIBREF15 Mukherjee18EMNLP use squared-loss mutual information to search for optimal cross-lingual word pairing. BIBREF13 and BIBREF20 exploit the structural similarity of word embedding spaces to learn word mappings. In this paper, we choose BIBREF11 Conneau18a as our baseline as it is theoretically attractive and gives strong results on large-scale datasets. Cycle Consistency. Forward-backward consistency has been used to discover the correspondence between unpaired images BIBREF21, BIBREF22. In machine translation, similar ideas have been exploited: BIBREF23, BIBREF24 and BIBREF25 use dual learning to train two “opposite” language translators by minimizing the reconstruction loss. BIBREF26 consider back-translation, where a backward model is used to build a synthetic parallel corpus and a forward model learns to generate genuine text based on the synthetic output. Closer to our method, BIBREF27 jointly train two autoencoders to learn supervised bilingual word embeddings. BIBREF28 use Sinkhorn distance BIBREF29 and back-translation to align word embeddings. However, they cannot perform fully unsupervised training, relying on WGAN BIBREF30 for providing initial mappings. Concurrent with our work, BIBREF31 build an adversarial autoencoder with a cycle consistency loss and a post-cycle reconstruction loss. In contrast to these works, our method is fully unsupervised, simpler, and empirically more effective. Approach We take BIBREF11 as our baseline, introducing a novel regularizer to enforce cycle consistency.
Let $X=\lbrace x_1,...,x_n\rbrace $ and $Y=\lbrace y_1,...,y_m\rbrace $ be two sets of $n$ and $m$ word embeddings for a source and a target language, respectively. The primal UBLI task aims to learn a linear mapping $\mathcal {F}:X\rightarrow Y$ such that for each $x_i$, $\mathcal {F}(x_i)$ corresponds to its translation in $Y$. Similarly, a linear mapping $\mathcal {G}:Y\rightarrow X$ is defined for the dual task. In addition, we introduce two language discriminators $D_x$ and $D_y$, which are trained to discriminate between the mapped word embeddings and the original word embeddings. Approach ::: Baseline Adversarial Model BIBREF11 align two word embedding spaces through generative adversarial networks, in which two networks are trained simultaneously. Specifically, taking the primal UBLI task as an example, the linear mapping $\mathcal {F}$ tries to generate “fake” word embeddings $\mathcal {F}(x)$ that look similar to word embeddings from $Y$, while the discriminator $D_y$ aims to distinguish between “fake” and real word embeddings from $Y$. Formally, this idea can be expressed as the minmax game min$_{\mathcal {F}}$max$_{D_y}\ell _{adv}(\mathcal {F},D_y,X,Y)$, where $P_{D_y}(src|y_j)$ is the probability, under the discriminator $D_y$, that the word embedding $y_j$ comes from the target language (src = 1) rather than from the primal mapping $\mathcal {F}$ (src = 0). Similarly, the dual UBLI problem can be formulated as min$_{\mathcal {G}}$max$_{D_x}\ell _{adv}(\mathcal {G},D_x,Y,X)$, where $\mathcal {G}$ is the dual mapping and $D_x$ is a source discriminator. Theoretically, a unique solution for the above minmax game exists, with the mapping and the discriminator reaching a Nash equilibrium. Since the adversarial training happens at the distribution level, no cross-lingual supervision is required. Approach ::: Regularizers for Dual Models We train $\mathcal {F}$ and $\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\mathcal {G}(\mathcal {F}(X))$ is similar to $X$ and $\mathcal {F}(\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\ell _{adv}$) for each model, as in the baseline; ii) a cycle consistency loss ($\ell _{cycle}$) on each side to prevent $\mathcal {F}$ and $\mathcal {G}$ from contradicting each other. The overall architecture of our model is illustrated in Figure FIGREF4. Cycle Consistency Loss. We introduce a cycle consistency loss $\ell _{cycle}$ based on a discrepancy criterion $\Delta $ between the original embeddings and their round-trip reconstructions, where $\Delta $ is set as the average cosine similarity in our model. Full objective. The final objective combines the adversarial losses and the cycle consistency losses in both directions. Approach ::: Model Selection We follow BIBREF11, using an unsupervised criterion to perform model selection. In preliminary experiments, we find during adversarial training that the single-direction criterion $S(\mathcal {F}, X, Y)$ of BIBREF11 does not always work well. To address this, we make a simple extension by calculating a weighted average of the forward and backward scores $S(\mathcal {F}, X, Y)$ and $S(\mathcal {G}, Y, X)$, where $\lambda $ is a hyperparameter that controls the importance of the two objectives. Here $S$ first generates bilingual lexicons using the learned mappings, and then computes the average cosine similarity of these translations. Experiments We perform two sets of experiments, to investigate the effectiveness of our duality regularization in isolation (Section SECREF16) and to compare our final models with the state-of-the-art methods in the literature (Section SECREF18), respectively.
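As a concrete reading of the two regularizers and the selection criterion described above, the sketch below computes the cycle consistency term as one minus the average cosine similarity between the original embeddings and their round-trip reconstructions under linear mappings $\mathcal {F}$ and $\mathcal {G}$, and forms the weighted selection score from the forward and backward criteria. The loss weighting, the row-vector convention, and all function names are assumptions for illustration, not the published implementation.

```python
import numpy as np

def avg_cosine(A, B):
    """Average cosine similarity between corresponding rows of A and B."""
    A_n = A / np.linalg.norm(A, axis=1, keepdims=True)
    B_n = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float(np.mean(np.sum(A_n * B_n, axis=1)))

def cycle_consistency_loss(X, Y, F, G):
    """Penalize G(F(X)) drifting away from X and F(G(Y)) drifting away from Y.
    X (n x D) and Y (m x D) hold word embeddings as rows; F and G are D x D
    linear mappings applied on the right (row-vector convention)."""
    loss_x = 1.0 - avg_cosine(X @ F @ G, X)
    loss_y = 1.0 - avg_cosine(Y @ G @ F, Y)
    return loss_x + loss_y

def selection_score(s_forward, s_backward, lam=0.5):
    """Weighted average of the forward criterion S(F, X, Y) and the backward
    criterion S(G, Y, X), used for unsupervised model selection."""
    return lam * s_forward + (1.0 - lam) * s_backward

# Tiny example: identity mappings incur zero cycle loss by construction.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(1000, 300)), rng.normal(size=(1200, 300))
F, G = np.eye(300), np.eye(300)
print(round(cycle_consistency_loss(X, Y, F, G), 6))    # 0.0
print(selection_score(0.52, 0.48))                     # 0.5
```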
Experiments ::: Experimental Settings Dataset and Setup. Our datasets include: (i) the Multilingual Unsupervised and Supervised Embeddings (MUSE) dataset released by BIBREF11 Conneau18a; (ii) the more challenging Vecmap dataset from BIBREF32 Dinu15 and the extensions of BIBREF33 Artetxe17ACL. We follow the evaluation setups of BIBREF11, utilizing cross-domain similarity local scaling (CSLS) for retrieving the translation of given source words. Following a standard evaluation practice BIBREF34, BIBREF35, BIBREF11, we report precision at 1 scores (P@1). Given the instability of existing methods, we follow BIBREF13 to perform 10 runs for each method and report the best and the average accuracies. Experiments ::: The Effectiveness of Dual Learning We compare our method with BIBREF11 (Adv-C) under the same settings. As shown in Table TABREF12, our model outperforms Adv-C on both MUSE and Vecmap for all language pairs (except ES-EN). In addition, the proposed approach is less sensitive to initialization, and thus more stable than Adv-C over multiple runs. These results demonstrate the effectiveness of dual learning. Our method is also superior to Adv-C for the low-resource language pairs English $\leftrightarrow $ Malay (MS) and English $\leftrightarrow $ Esperanto (EO). Adv-C gives low performance on ES-EN and DE-EN, but much better results in the opposite directions on Vecmap. This is likely because the separate models are highly under-constrained, and thus easy to get stuck in poor local optima. In contrast, our method gives comparable results in both directions for the two languages, thanks to the use of information symmetry. Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization. Experiments ::: Comparison with the State-of-the-art In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learns a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, constructing a bilingual lexicon and re-learning the mapping matrix iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines Sinkhorn distance BIBREF29 and back-translation. For fair comparison, we integrate our model with two iterative refinement methods (Procrustes and GeoMM$_{semi}$). Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods.
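For concreteness, here is one way to implement the CSLS-based retrieval used for P@1 and the back-translation inconsistency rate reported above: translations are retrieved by CSLS rather than plain nearest-neighbour search, and a source word counts as inconsistent if translating it forward and then back does not return the original word. A minimal NumPy sketch assuming row-normalized embeddings, a neighbourhood size of 10, and hypothetical helper names; it is not the authors' code.

```python
import numpy as np

def normalize(M):
    """L2-normalize each row."""
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def csls_scores(SX, TY, k=10):
    """CSLS between mapped source embeddings SX (n x D) and target embeddings
    TY (m x D), both row-normalized: 2*cos(x, y) - r_T(x) - r_S(y)."""
    cos = SX @ TY.T                                     # (n, m)
    r_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1)   # mean sim of x to its k nearest targets
    r_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0)   # mean sim of y to its k nearest sources
    return 2 * cos - r_src[:, None] - r_tgt[None, :]

def back_translation_inconsistency(X, Y, F, G, k=10):
    """Fraction of source words whose forward translation, translated back,
    is not the original word."""
    fwd = csls_scores(normalize(X @ F), normalize(Y), k).argmax(axis=1)   # x_i -> y_{fwd[i]}
    bwd = csls_scores(normalize(Y @ G), normalize(X), k).argmax(axis=1)   # y_j -> x_{bwd[j]}
    return float(np.mean(bwd[fwd] != np.arange(X.shape[0])))

# Example with random embeddings (real use: fastText vectors and learned F, G).
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(500, 300)), rng.normal(size=(600, 300))
print(back_translation_inconsistency(X, Y, np.eye(300), np.eye(300)))
```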
Our model based on Procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initializations, so that the Procrustes solution is less likely to fall into poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima. Additionally, we observe that our unsupervised method performs competitively with, and sometimes better than, strong supervised and semi-supervised approaches. Ours-Procrustes obtains results comparable to Procrustes on EN-IT and gives strong results on EN-DE, EN-FI, EN-ES and the opposite directions. Ours-GeoMM$_{semi}$ obtains the state-of-the-art results on all tested language pairs except EN-FI, with the additional advantage of being fully unsupervised. Conclusion We investigated a regularization method to enhance unsupervised bilingual lexicon induction, by encouraging symmetry in lexical mapping between a pair of word embedding spaces. Results show that strengthening bi-directional mapping consistency significantly improves the effectiveness over the state-of-the-art method, leading to the best results on a standard benchmark.
An adversarial loss ($\ell _{adv}$) for each model, as in the baseline, and a cycle consistency loss ($\ell _{cycle}$) on each side.
d0c79f4a5d5c45fe673d9fcb3cd0b7dd65df7636
d0c79f4a5d5c45fe673d9fcb3cd0b7dd65df7636_0
Q: What are new best results on standard benchmark?
New best results of accuracy (P@1) on Vecmap, obtained by Ours-GeoMM$_{semi}$: EN-IT 50.00, IT-EN 42.67, EN-DE 51.60, DE-EN 47.22, FI-EN 39.62, EN-ES 39.47, ES-EN 36.43.
54c7fc08598b8b91a8c0399f6ab018c45e259f79
54c7fc08598b8b91a8c0399f6ab018c45e259f79_0
Q: How better is performance compared to competitive baselines?
Proposed method vs best baseline result on Vecmap (accuracy P@1): EN-IT 50 vs 50; IT-EN 42.67 vs 42.67; EN-DE 51.6 vs 51.47; DE-EN 47.22 vs 46.96; EN-FI 35.88 vs 36.24; FI-EN 39.62 vs 39.57; EN-ES 39.47 vs 39.30; ES-EN 36.43 vs 36.06.
5112bbf13c7cf644bf401daecb5e3265889a4bfc
5112bbf13c7cf644bf401daecb5e3265889a4bfc_0
Q: How big is data used in experiments?
Unanswerable
03ce42ff53aa3f1775bc57e50012f6eb1998c480
03ce42ff53aa3f1775bc57e50012f6eb1998c480_0
Q: What 6 language pairs is experimented on?
Let $X=\lbrace x_1,...,x_n\rbrace $ and $Y=\lbrace y_1,...,y_m\rbrace $ be two sets of $n$ and $m$ word embeddings for a source and a target language, respectively. The primal UBLI task aims to learn a linear mapping $\mathcal {F}:X\rightarrow Y$ such that for each $x_i$, $\mathcal {F}(x_i)$ corresponds to its translation in $Y$. Similarly, a linear mapping $\mathcal {G}:Y\rightarrow X$ is defined for the dual task. In addition, we introduce two language discriminators $D_x$ and $D_y$, which are trained to discriminate between the mapped word embeddings and the original word embeddings. Approach ::: Baseline Adversarial Model BIBREF11 align two word embedding spaces through generative adversarial networks, in which two networks are trained simultaneously. Specifically, take the primal UBLI task as an example, the linear mapping $\mathcal {F}$ tries to generate “fake” word embeddings $\mathcal {F}(x)$ that look similar to word embeddings from $Y$, while the discriminator $D_y$ aims to distinguish between “fake” and real word embeddings from $Y$. Formally, this idea can be expressed as the minmax game min$_{\mathcal {F}}$max$_{D{_y}}\ell _{adv}(\mathcal {F},D_y,X,Y)$, where $P_{D_y}(src|y_j)$ is a model probability from $D_y$ to distinguish whether word embedding $y_j$ is coming from the target language (src = 1) or the primal mapping $\mathcal {F}$ (src = 0). Similarly, the dual UBLI problem can be formulated as min$_{\mathcal {G}}$max$_{D_x}\ell _{adv}(\mathcal {G},D_x,Y,X)$, where $\mathcal {G}$ is the dual mapping, and $D_x$ is a source discriminator. Theoretically, a unique solution for above minmax game exists, with the mapping and the discriminator reaching a nash equilibrium. Since the adversarial training happens at the distribution level, no cross-lingual supervision is required. Approach ::: Regularizers for Dual Models We train $\mathcal {F}$ and $\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\mathcal {G}(\mathcal {F}(X))$ is similar to $X$ and $\mathcal {F}(\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\ell _{cycle}$) on each side to avoid $\mathcal {F}$ and $\mathcal {G}$ from contradicting each other. The overall architecture of our model is illustrated in Figure FIGREF4. Cycle Consistency Loss. We introduce where $\Delta $ denotes the discrepancy criterion, which is set as the average cosine similarity in our model. Full objective. The final objective is: Approach ::: Model Selection We follow BIBREF11, using an unsupervised criterion to perform model selection. In preliminary experiments, we find in adversarial training that the single-direction criterion $S(\mathcal {F}, X, Y)$ by BIBREF11 does not always work well. To address this, we make a simple extension by calculating the weighted average of forward and backward scores: Where $\lambda $ is a hyperparameter to control the importance of the two objectives. Here $S$ first generates bilingual lexicons by learned mappings, and then computes the average cosine similarity of these translations. Experiments We perform two sets of experiments, to investigate the effectiveness of our duality regularization in isolation (Section SECREF16) and to compare our final models with the state-of-the-art methods in the literature (Section SECREF18), respectively. 
Experiments ::: Experimental Settings Dataset and Setup. Our datasets includes: (i) The Multilingual Unsupervised and Supervised Embeddings (MUSE) dataset released by BIBREF11 Conneau18a. (ii) the more challenging Vecmap dataset from BIBREF32 Dinu15 and the extensions of BIBREF33 Artetxe17ACL. We follow the evaluation setups of BIBREF11, utilizing cross-domain similarity local scaling (CSLS) for retrieving the translation of given source words. Following a standard evaluation practice BIBREF34, BIBREF35, BIBREF11, we report precision at 1 scores (P@1). Given the instability of existing methods, we follow BIBREF13 to perform 10 runs for each method and report the best and the average accuracies. Experiments ::: The Effectiveness of Dual Learning We compare our method with BIBREF11 (Adv-C) under the same settings. As shown in Table TABREF12, our model outperforms Adv-C on both MUSE and Vecmap for all language pairs (except ES-EN). In addition, the proposed approach is less sensitive to initialization, and thus more stable than Adv-C over multiple runs. These results demonstrate the effectiveness of dual learning. Our method is also superior to Adv-C for the low-resource language pairs English $\leftrightarrow $ Malay (MS) and English $\leftrightarrow $ English-Esperanto (EO). Adv-C gives low performances on ES-EN, DE-EN, but much better results on the opposite directions on Vecmap. This is likely because the separate models are highly under-constrained, and thus easy to get stuck in poor local optima. In contrast, our method gives comparable results on both directions for the two languages, thanks to the use of information symmetry. Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization. Experiments ::: Comparison with the State-of-the-art In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learn a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines sinkhorn distance BIBREF29 and back-translation. For fair comparison, we integrate our model with two iterative refinement methods (Procrustes and GeoMM$_{semi}$). Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. 
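As background for the accuracies that follow, the CSLS retrieval used in the evaluation setup above can be sketched in a few lines of NumPy. The formula follows the usual definition, CSLS(x, y) = 2·cos(x, y) − r_T(x) − r_S(y), where r is the mean cosine similarity to the k nearest cross-lingual neighbours; k = 10, the unit-normalized rows, and the function name are assumptions.

```python
import numpy as np

def csls_retrieve(src_emb, tgt_emb, k=10):
    """Return, for every source vector, the index of the target vector with the
    highest CSLS score. src_emb: (n, d), tgt_emb: (m, d), rows unit-normalized
    so the dot product equals cosine similarity."""
    sims = src_emb @ tgt_emb.T                                         # (n, m)
    # r_T(x): mean similarity of each source word to its k nearest target words
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1, keepdims=True)  # (n, 1)
    # r_S(y): mean similarity of each target word to its k nearest source words
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0, keepdims=True)  # (1, m)
    csls = 2 * sims - r_src - r_tgt
    return csls.argmax(axis=1)          # predicted translation index per source word
```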
Our model based on Procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning helps provide good initializations, so that the Procrustes solution is less likely to fall into poor local optima. Unsup-SL gives strong results on all language pairs because it uses a robust self-learning framework, which contains several techniques to avoid poor local optima. Additionally, we observe that our unsupervised method performs competitively with, and in some cases better than, strong supervised and semi-supervised approaches. Ours-Procrustes obtains comparable results with Procrustes on EN-IT and gives strong results on EN-DE, EN-FI, EN-ES and the opposite directions. Ours-GeoMM$_{semi}$ obtains state-of-the-art results on all tested language pairs except EN-FI, with the additional advantage of being fully unsupervised. Conclusion We investigated a regularization method to enhance unsupervised bilingual lexicon induction, by encouraging symmetry in lexical mapping between a pair of word embedding spaces. Results show that strengthening bi-directional mapping consistency significantly improves effectiveness over the state-of-the-art method, leading to the best results on a standard benchmark.
EN<->ES, EN<->DE, EN<->IT, EN<->EO, EN<->MS, EN<->FI
ebeedbb8eecdf118d543fdb5224ae610eef212c8
ebeedbb8eecdf118d543fdb5224ae610eef212c8_0
Q: What are current state-of-the-art methods that consider the two tasks independently? Text: Introduction Unsupervised bilingual lexicon induction (UBLI) has been shown to benefit NLP tasks for low resource languages, including unsupervised NMT BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, information retrieval BIBREF5, BIBREF6, dependency parsing BIBREF7, and named entity recognition BIBREF8, BIBREF9. Recent research has attempted to induce unsupervised bilingual lexicons by aligning monolingual word vector spaces BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15. Given a pair of languages, their word alignment is inherently a bi-directional problem (e.g. English-Italian vs Italian-English). However, most existing research considers mapping from one language to another without making use of symmetry. Our experiments show that separately learned UBLI models are not always consistent in opposite directions. As shown in Figure 1a, when the model of BIBREF11 Conneau18a is applied to English and Italian, the primal model maps the word “three” to the Italian word “tre”, but the dual model maps “tre” to “two” instead of “three”. We propose to address this issue by exploiting duality, encouraging forward and backward mappings to form a closed loop (Figure 1b). In particular, we extend the model of BIBREF11 Conneau18a by using a cycle consistency loss BIBREF16 to regularize two models in opposite directions. Experiments on two benchmark datasets show that the simple method of enforcing consistency gives better results in both directions. Our model significantly outperforms competitive baselines, obtaining the best published results. We release our code at xxx. Related Work UBLI. A typical line of work uses adversarial training BIBREF17, BIBREF10, BIBREF18, BIBREF11, matching the distributions of source and target word embeddings through generative adversarial networks BIBREF19. Non-adversarial approaches have also been explored. For instance, BIBREF15 Mukherjee18EMNLP use squared-loss mutual information to search for optimal cross-lingual word pairing. BIBREF13 and BIBREF20 exploit the structural similarity of word embedding spaces to learn word mappings. In this paper, we choose BIBREF11 Conneau18a as our baseline as it is theoretically attractive and gives strong results on large-scale datasets. Cycle Consistency. Forward-backward consistency has been used to discover the correspondence between unpaired images BIBREF21, BIBREF22. In machine translation, similar ideas were exploited, BIBREF23, BIBREF24 and BIBREF25 use dual learning to train two “opposite” language translators by minimizing the reconstruction loss. BIBREF26 consider back-translation, where a backward model is used to build synthetic parallel corpus and a forward model learns to generate genuine text based on the synthetic output. Closer to our method, BIBREF27 jointly train two autoencoders to learn supervised bilingual word embeddings. BIBREF28 use sinkhorn distance BIBREF29 and back-translation to align word embeddings. However, they cannot perform fully unsupervised training, relying on WGAN BIBREF30 for providing initial mappings. Concurrent with our work, BIBREF31 build a adversarial autoencoder with cycle consistency loss and post-cycle reconstruction loss. In contrast to these works, our method is fully unsupervised, simpler, and empirically more effective. Approach We take BIBREF11 as our baseline, introducing a novel regularizer to enforce cycle consistency. 
Let $X=\lbrace x_1,...,x_n\rbrace $ and $Y=\lbrace y_1,...,y_m\rbrace $ be two sets of $n$ and $m$ word embeddings for a source and a target language, respectively. The primal UBLI task aims to learn a linear mapping $\mathcal {F}:X\rightarrow Y$ such that for each $x_i$, $\mathcal {F}(x_i)$ corresponds to its translation in $Y$. Similarly, a linear mapping $\mathcal {G}:Y\rightarrow X$ is defined for the dual task. In addition, we introduce two language discriminators $D_x$ and $D_y$, which are trained to discriminate between the mapped word embeddings and the original word embeddings. Approach ::: Baseline Adversarial Model BIBREF11 align two word embedding spaces through generative adversarial networks, in which two networks are trained simultaneously. Specifically, take the primal UBLI task as an example, the linear mapping $\mathcal {F}$ tries to generate “fake” word embeddings $\mathcal {F}(x)$ that look similar to word embeddings from $Y$, while the discriminator $D_y$ aims to distinguish between “fake” and real word embeddings from $Y$. Formally, this idea can be expressed as the minmax game min$_{\mathcal {F}}$max$_{D{_y}}\ell _{adv}(\mathcal {F},D_y,X,Y)$, where $P_{D_y}(src|y_j)$ is a model probability from $D_y$ to distinguish whether word embedding $y_j$ is coming from the target language (src = 1) or the primal mapping $\mathcal {F}$ (src = 0). Similarly, the dual UBLI problem can be formulated as min$_{\mathcal {G}}$max$_{D_x}\ell _{adv}(\mathcal {G},D_x,Y,X)$, where $\mathcal {G}$ is the dual mapping, and $D_x$ is a source discriminator. Theoretically, a unique solution for above minmax game exists, with the mapping and the discriminator reaching a nash equilibrium. Since the adversarial training happens at the distribution level, no cross-lingual supervision is required. Approach ::: Regularizers for Dual Models We train $\mathcal {F}$ and $\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\mathcal {G}(\mathcal {F}(X))$ is similar to $X$ and $\mathcal {F}(\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\ell _{cycle}$) on each side to avoid $\mathcal {F}$ and $\mathcal {G}$ from contradicting each other. The overall architecture of our model is illustrated in Figure FIGREF4. Cycle Consistency Loss. We introduce where $\Delta $ denotes the discrepancy criterion, which is set as the average cosine similarity in our model. Full objective. The final objective is: Approach ::: Model Selection We follow BIBREF11, using an unsupervised criterion to perform model selection. In preliminary experiments, we find in adversarial training that the single-direction criterion $S(\mathcal {F}, X, Y)$ by BIBREF11 does not always work well. To address this, we make a simple extension by calculating the weighted average of forward and backward scores: Where $\lambda $ is a hyperparameter to control the importance of the two objectives. Here $S$ first generates bilingual lexicons by learned mappings, and then computes the average cosine similarity of these translations. Experiments We perform two sets of experiments, to investigate the effectiveness of our duality regularization in isolation (Section SECREF16) and to compare our final models with the state-of-the-art methods in the literature (Section SECREF18), respectively. 
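Before the experiments, the bidirectional model-selection criterion described above reduces to a weighted average of the forward and backward validation scores. The sketch below assumes S builds a lexicon with the learned mapping (e.g. via CSLS) and returns the mean cosine similarity of the induced pairs, with lam standing in for $\lambda $; the names are illustrative.

```python
def unsupervised_score(S, F_map, G_map, X, Y, lam=0.5):
    """Bidirectional model-selection criterion:
        S_avg = lam * S(F, X, Y) + (1 - lam) * S(G, Y, X)
    where S(.) scores a mapping by inducing translations and averaging the
    cosine similarity of the induced pairs."""
    forward = S(F_map, X, Y)    # quality of the primal mapping F: X -> Y
    backward = S(G_map, Y, X)   # quality of the dual mapping   G: Y -> X
    return lam * forward + (1.0 - lam) * backward
```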
Experiments ::: Experimental Settings Dataset and Setup. Our datasets includes: (i) The Multilingual Unsupervised and Supervised Embeddings (MUSE) dataset released by BIBREF11 Conneau18a. (ii) the more challenging Vecmap dataset from BIBREF32 Dinu15 and the extensions of BIBREF33 Artetxe17ACL. We follow the evaluation setups of BIBREF11, utilizing cross-domain similarity local scaling (CSLS) for retrieving the translation of given source words. Following a standard evaluation practice BIBREF34, BIBREF35, BIBREF11, we report precision at 1 scores (P@1). Given the instability of existing methods, we follow BIBREF13 to perform 10 runs for each method and report the best and the average accuracies. Experiments ::: The Effectiveness of Dual Learning We compare our method with BIBREF11 (Adv-C) under the same settings. As shown in Table TABREF12, our model outperforms Adv-C on both MUSE and Vecmap for all language pairs (except ES-EN). In addition, the proposed approach is less sensitive to initialization, and thus more stable than Adv-C over multiple runs. These results demonstrate the effectiveness of dual learning. Our method is also superior to Adv-C for the low-resource language pairs English $\leftrightarrow $ Malay (MS) and English $\leftrightarrow $ English-Esperanto (EO). Adv-C gives low performances on ES-EN, DE-EN, but much better results on the opposite directions on Vecmap. This is likely because the separate models are highly under-constrained, and thus easy to get stuck in poor local optima. In contrast, our method gives comparable results on both directions for the two languages, thanks to the use of information symmetry. Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization. Experiments ::: Comparison with the State-of-the-art In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learn a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines sinkhorn distance BIBREF29 and back-translation. For fair comparison, we integrate our model with two iterative refinement methods (Procrustes and GeoMM$_{semi}$). Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. 
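The back-translation inconsistency rate discussed above (Table TABREF13) can be estimated by checking whether each source word survives a round trip through both mappings. The sketch below is one plausible way to compute such a rate, not the paper's code; it assumes unit-normalized rows and nearest-neighbour retrieval in the source space.

```python
import numpy as np

def inconsistency_rate(F_map, G_map, X):
    """Fraction of source words x_i whose back-translation G(F(x_i)) does not
    map back to x_i under nearest-neighbour retrieval among the source rows."""
    X_fwd = X @ F_map.T          # map source embeddings into the target space
    X_back = X_fwd @ G_map.T     # map them back into the source space
    nn = (X_back @ X.T).argmax(axis=1)   # cosine nearest neighbour (unit-norm rows)
    return float(np.mean(nn != np.arange(X.shape[0])))
```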
Our model based on Procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning helps provide good initializations, so that the Procrustes solution is less likely to fall into poor local optima. Unsup-SL gives strong results on all language pairs because it uses a robust self-learning framework, which contains several techniques to avoid poor local optima. Additionally, we observe that our unsupervised method performs competitively with, and in some cases better than, strong supervised and semi-supervised approaches. Ours-Procrustes obtains comparable results with Procrustes on EN-IT and gives strong results on EN-DE, EN-FI, EN-ES and the opposite directions. Ours-GeoMM$_{semi}$ obtains state-of-the-art results on all tested language pairs except EN-FI, with the additional advantage of being fully unsupervised. Conclusion We investigated a regularization method to enhance unsupervised bilingual lexicon induction, by encouraging symmetry in lexical mapping between a pair of word embedding spaces. Results show that strengthening bi-directional mapping consistency significantly improves effectiveness over the state-of-the-art method, leading to the best results on a standard benchmark.
Procrustes, GPA, GeoMM, GeoMM$_{semi}$, Adv-C-Procrustes, Unsup-SL, Sinkhorn-BT
9efd025cfa69c6ff2777528bd158f79ead9353d1
9efd025cfa69c6ff2777528bd158f79ead9353d1_0
Q: How big is their training set? Text: Introduction The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted. As entailment data, premises in FEVER data differ substantially from those in the image caption data used as the basis for the Stanford Natural Language Inference (SNLI) BIBREF1 dataset. Sentences are longer (31 compared to 14 words on average), vocabulary is more abstract, and the prevalence of named entities and out-of-vocabulary terms is higher. The retrieval aspect of FEVER is not straightforward either. A claim may have small word overlap with the relevant evidence, especially if the claim is refuted by the evidence. Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods. Transformer network The core of our system is an entailment module based on a transformer network. Transformer networks BIBREF6 are deep networks applied to sequential input data, with each layer implementing multiple heads of scaled dot product attention. This attention mechanism allows deep features to be compared across positions in the input. Many entailment networks have two sequence inputs, but the transformer is designed with just one. A separator token divides the premise from the hypothesis. We use a specific transformer network released by OpenAI BIBREF5 that has been pre-trained for language modeling. The network consists of twelve blocks. Each block consists of a multi-head masked self-attention layer, layer normalization BIBREF7 , a feed forward network, and another layer normalization. After the twelfth block, two branches exist. In one branch, matrix multiplication and softmax layers are applied at the terminal sequence position to predict the entailment classification. In the other branch, a hidden state is multiplied by each token embedding and a softmax is taken to predict the next token. The language modeling branch has been pre-trained on the BookCorpus dataset BIBREF8 . We take the pre-trained model and train both branches on examples from FEVER. Reframing entailment The baseline FEVER system BIBREF0 ran the AllenNLP BIBREF3 implementation of Decomposable Attention BIBREF2 to classify a group of five premise statements concatenated together against the claim. These five premise statements were fixed by the retrieval module and not considered individually. In our system, premise statements are individually evaluated. 
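As a rough illustration of the transformer-based entailment module described above, the sketch below loads the same `openai-gpt` checkpoint through the Hugging Face `transformers` library and attaches only a three-way classification head. This is a modern analogue, not the authors' setup: the original work adds a learned separator token and trains the language-modelling branch alongside the classifier, whereas here the plain-text delimiter, the library calls, and the label order are assumptions, and the classification head is randomly initialized until fine-tuned on FEVER examples.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
# Three-way head (SUPPORTS / REFUTES / NOT ENOUGH INFO); randomly initialized
# here and only meaningful after fine-tuning on FEVER premise-claim pairs.
model = AutoModelForSequenceClassification.from_pretrained("openai-gpt", num_labels=3)
model.eval()

LABELS = ["SUPPORTS", "REFUTES", "NOT ENOUGH INFO"]  # assumed label order

def classify(premise: str, claim: str) -> str:
    # A plain-text delimiter stands in for the learned separator token.
    text = premise + " _delimiter_ " + claim
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits   # shape: (1, 3)
    return LABELS[int(logits.argmax(dim=-1))]
```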
We collect training data as the five sentences with the highest TFIDF score against the claim, taken from the Wikipedia pages selected by the retrieval module. If any ground truth evidence group for a claim requires more than one sentence, the claim is dropped from the training set. Otherwise, each sentence is labeled with the truth value of the claim if it is in the ground truth evidence set, and labeled as neutral if not. The resulting data forms an entailment problem that we call “FEVER One.” For comparison, we form “FEVER Five” and “FEVER Five Oracle” by concatenating all five retrieved sentences, as in the baseline. In FEVER Five Oracle, the ground truth is the claim ground truth (if verifiable), but in FEVER Five, ground truth depends on whether the retrieved evidence is in the ground truth evidence set. Several FEVER claims require multiple statements as evidence in order to be supported or refuted. The number of such claims is relatively small: in the first half of the development set, only 623 of 9999 claims were verifiable and had no singleton evidence groups. Furthermore, we disagreed with many of these annotations and thought that less evidence should have sufficed. Thus we chose not to develop a strategy for multiple evidence statements. To compare results on FEVER Five to FEVER One, we must aggregate decisions about individual sentences of possible evidence to a decision about the claim. We do this by applying the following rules: We resolve conflicts between supporting and refuting information in favor of the supporting information, because we observed cases in the development data where information was retrieved for different entities with the same name. For example, Ann Richards appeared both as a governor of Texas and as an Australian actress. Information that would be a contradiction regarding the actress should not stop evidence that would support a claim about the politician. Even if a sentence is in the evidence set, it might not be possible for the classifier to correctly determine whether it supports the claim, because the sentence could have pronouns with antecedents outside the given sentence. Ideally, a coreference resolution system could add this information to the sentence, but running one could be time consuming and introduce its own errors. As a cheap alternative, we make the classifier aware of the title of the Wikipedia page. We convert any undersores in the page title to spaces, and insert the title between brackets before the rest of each premise sentence. The dataset constructed in this way is called “FEVER Title One.” The FEVER baseline system works by solving FEVER Five Oracle. Using Decomposable Attention, it achieves .505 accuracy on the test half of the development set. Swapping in the Enhanced Sequential Inference Model (ESIM) BIBREF4 to solve FEVER Five Oracle results in an accuracy of .561. Because ESIM uses a single out-of-vocabulary (OOV) token for all unknown words, we expect it to confuse named entities. Thus we extend the model by allocating 10,000 indices for out-of-vocabulary words with randomly initialized embeddings, and taking a hash of each OOV word to select one of these indices. With extended ESIM, the accuracy is .586. Therefore, we run most later comparisons with extended ESIM or transformer networks as the entailment module, rather than Decomposable Attention. The FEVER One dataset is highly unbalanced in favor of neutral statements, so that the majority class baseline would achieve 93.0% on this data. 
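Before turning to the class-imbalance issue just raised, the out-of-vocabulary extension of ESIM described above amounts to a deterministic hash from unknown tokens into 10,000 reserved embedding rows. The sketch below uses MD5 so the bucket assignment is stable across runs, which is our choice rather than a detail stated in the paper; the vocabulary size and names are assumptions.

```python
import hashlib

VOCAB_SIZE = 40000       # assumed size of the in-vocabulary embedding table
NUM_OOV_BUCKETS = 10000  # extra randomly initialized embeddings reserved for OOV words

def token_index(token: str, word2id: dict) -> int:
    """Map a token to an embedding row: known words use their vocabulary id,
    unknown words hash into one of NUM_OOV_BUCKETS reserved rows."""
    if token in word2id:
        return word2id[token]
    h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
    return VOCAB_SIZE + (h % NUM_OOV_BUCKETS)
```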
In fact it makes training ESIM a challenge, as the model only learns the trivial majority class predictor if the natural training distribution is followed. We reweight the examples in FEVER One for ESIM so that each class contributes to the loss equally. Then, we use Cohen's Kappa rather than the accuracy to evaluate a model's quality, so that following the bias with purely random agreement is not rewarded in the evaluation. In Table 1 we compare FEVER One to FEVER Title One, both at the level of classifying individual support statements and of classifying the claim by aggregating these decisions as described above. On a support basis, we find a 52% increase in Kappa by adding the titles. When ESIM is replaced by the transformer network, class reweighting is not necessary. The network naturally learns to perform in excess of the majority class baseline. Cohen's Kappa is 68% higher than that for ESIM. The possibility of training on oracle labels for a concatenated set of evidence allows a classifier to simply guess whether the hypothesis is true and supported somewhere, rather than having to consider the relationship between hypothesis and premise. For example, it is possible to classify 67% of SNLI examples correctly without reading the premise BIBREF9 . As we show in Table 2 , for ESIM, we find that this kind of guessing makes the FEVER Title Five Oracle performance better than FEVER Title Five. The Transformer model is accurate enough that oracle guessing does not help. Both models perform best when classifying each bit of evidence separately and then aggregating. Improving retrieval Regardless of how strong the entailment classifier is, FEVER score is limited by whether the document and sentence retrieval modules, which produce the input to the entailment classifier, find the right evidence. In Table 3 , we examine the percentage of claims for which correct evidence is retrieved, before filtering with the entailment classifier. For this calculation, we skip any claim with an evidence group with multiple statements, and count a claim as succesfully retrieved if it is not verifiable or if the statement in one of the evidence groups is retrieved. The baseline system retrieves the five articles with the highest TFIDF score, and then extracts the five sentences from that collection with the highest TFIDF score against the claim. It achieves 66.1% evidence retrieval. Our first modification simply adds the title to each premise statement when computing its TFIDF against the claim, so that statements from a relevant article get credit even if the subject is not repeated. This raises evidence retrieval to 68.3%. A more significant boost comes from retrieving additional Wikipedia pages based on named entity recognition (NER). We start with phrases tagged as named entities by SpaCy BIBREF10 , but these tags are not very reliable, so we include various capitalized phrases. We retrieve Wikipedia pages whose title exactly matches one of these phrases. The named entity retrieval strategy boosts the evidence retrieval rate to 80.8%, while less than doubling the processing time. However, sometimes the named entity page thus retrieved is only a Wikipedia disambiguation page with no useful information. Noticing a lot of questions about films in the development set, we modify the strategy to also retrieve a page titled “X (film)” if it exists, whenever “X” is retrieved. The film retrievals raise evidence retrieval to 81.2%. 
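A sketch of the named-entity page retrieval heuristic just described, combining spaCy entities with a capitalized-phrase fallback and the "X (film)" variant. The exact-title lookup against a set of available page titles (wiki_titles) and the regular expression are assumptions.

```python
import re
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def candidate_pages(claim: str, wiki_titles: set) -> set:
    """Pages whose title exactly matches a named entity or capitalized phrase
    in the claim, plus an 'X (film)' variant when such a page exists."""
    phrases = {ent.text for ent in nlp(claim).ents}
    # The NER tags are not fully reliable, so also take capitalized phrases.
    phrases |= set(re.findall(r"[A-Z][\w'-]*(?:\s+[A-Z][\w'-]*)*", claim))
    pages = set()
    for phrase in phrases:
        title = phrase.replace(" ", "_")
        if title in wiki_titles:
            pages.add(title)
        if title + "_(film)" in wiki_titles:
            pages.add(title + "_(film)")
    return pages
```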
Finally, we eliminate the TFIDF sentence ranking to expand sentence retrieval from five sentences to entire articles, up to the first fifty sentences from each. Thus we obtain 2.6 million statements to classify regarding the 19,998 claims in the shared task development set, for an average of 128 premises per claim. The evidence retrieval rate, including all these premises, increases to 90.1%. We continue to apply the entailment module trained with only five premise retrievals. Running the entailment module on this batch using a machine with three NVIDIA GeForce GTX 1080Ti GPU cards takes on the order of six hours. Retrieving more than five sentences means that we can no longer submit all retrieved evidence as support for the claims. Instead, we follow the aggregation strategy from Section "Reframing entailment" to decide the claim label, and only submit statements whose classification matches. Limiting evidence in this way when only five statements are retrieved (“narrow evidence” in Table 4 ) pushes FEVER score down very little, to .5550 from .5617 on the development set, so we have confidence that the extra retrieval will make up for the loss. Indeed, when the system reviews the extra evidence, FEVER score goes up to .5844 on the development set. Table 4 compares the end-to-end performance of systems that evaluate five retrieved statements together, evaluate five retrieved statements separately, and evaluate all statements from entire articles separately. Evaluating the statements separately gives better performance. We submit the systems that retrieve five statements and entire articles for evaluation on the test set, achieving preliminary FEVER scores of .5539 and .5736 respectively (label accuracy of .5754 and .6108, evidence recall of .6245 and .5002, evidence F1 of .2542 and .6485). In preliminary standings, the latter system ranks fourth in FEVER score and first in evidence F1. Discussion Our approach to FEVER involves a minimum of heuristics and relies mainly on the strength of the Transformer Network based entailment classification. The main performance gains come from adding retrievals that resolve named entities rather than matching the claim text only, filtering fewer of the retrievals, and making the entailment classifier somewhat aware of the topic of what it is reading by including the title. If higher quality and more plentiful multi-evidence claims would be constructed, it would be nice to incorporate dynamic retrievals into the system, allowing the classifier to decide that it needs more information about keywords it encountered during reading.
Unanswerable
559c1307610a15427caeb8aff4d2c01ae5c9de20
559c1307610a15427caeb8aff4d2c01ae5c9de20_0
Q: What baseline do they compare to? Text: Introduction The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted. As entailment data, premises in FEVER data differ substantially from those in the image caption data used as the basis for the Stanford Natural Language Inference (SNLI) BIBREF1 dataset. Sentences are longer (31 compared to 14 words on average), vocabulary is more abstract, and the prevalence of named entities and out-of-vocabulary terms is higher. The retrieval aspect of FEVER is not straightforward either. A claim may have small word overlap with the relevant evidence, especially if the claim is refuted by the evidence. Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods. Transformer network The core of our system is an entailment module based on a transformer network. Transformer networks BIBREF6 are deep networks applied to sequential input data, with each layer implementing multiple heads of scaled dot product attention. This attention mechanism allows deep features to be compared across positions in the input. Many entailment networks have two sequence inputs, but the transformer is designed with just one. A separator token divides the premise from the hypothesis. We use a specific transformer network released by OpenAI BIBREF5 that has been pre-trained for language modeling. The network consists of twelve blocks. Each block consists of a multi-head masked self-attention layer, layer normalization BIBREF7 , a feed forward network, and another layer normalization. After the twelfth block, two branches exist. In one branch, matrix multiplication and softmax layers are applied at the terminal sequence position to predict the entailment classification. In the other branch, a hidden state is multiplied by each token embedding and a softmax is taken to predict the next token. The language modeling branch has been pre-trained on the BookCorpus dataset BIBREF8 . We take the pre-trained model and train both branches on examples from FEVER. Reframing entailment The baseline FEVER system BIBREF0 ran the AllenNLP BIBREF3 implementation of Decomposable Attention BIBREF2 to classify a group of five premise statements concatenated together against the claim. These five premise statements were fixed by the retrieval module and not considered individually. In our system, premise statements are individually evaluated. 
We collect training data as the five sentences with the highest TFIDF score against the claim, taken from the Wikipedia pages selected by the retrieval module. If any ground truth evidence group for a claim requires more than one sentence, the claim is dropped from the training set. Otherwise, each sentence is labeled with the truth value of the claim if it is in the ground truth evidence set, and labeled as neutral if not. The resulting data forms an entailment problem that we call “FEVER One.” For comparison, we form “FEVER Five” and “FEVER Five Oracle” by concatenating all five retrieved sentences, as in the baseline. In FEVER Five Oracle, the ground truth is the claim ground truth (if verifiable), but in FEVER Five, ground truth depends on whether the retrieved evidence is in the ground truth evidence set. Several FEVER claims require multiple statements as evidence in order to be supported or refuted. The number of such claims is relatively small: in the first half of the development set, only 623 of 9999 claims were verifiable and had no singleton evidence groups. Furthermore, we disagreed with many of these annotations and thought that less evidence should have sufficed. Thus we chose not to develop a strategy for multiple evidence statements. To compare results on FEVER Five to FEVER One, we must aggregate decisions about individual sentences of possible evidence to a decision about the claim. We do this by applying the following rules: We resolve conflicts between supporting and refuting information in favor of the supporting information, because we observed cases in the development data where information was retrieved for different entities with the same name. For example, Ann Richards appeared both as a governor of Texas and as an Australian actress. Information that would be a contradiction regarding the actress should not stop evidence that would support a claim about the politician. Even if a sentence is in the evidence set, it might not be possible for the classifier to correctly determine whether it supports the claim, because the sentence could have pronouns with antecedents outside the given sentence. Ideally, a coreference resolution system could add this information to the sentence, but running one could be time consuming and introduce its own errors. As a cheap alternative, we make the classifier aware of the title of the Wikipedia page. We convert any undersores in the page title to spaces, and insert the title between brackets before the rest of each premise sentence. The dataset constructed in this way is called “FEVER Title One.” The FEVER baseline system works by solving FEVER Five Oracle. Using Decomposable Attention, it achieves .505 accuracy on the test half of the development set. Swapping in the Enhanced Sequential Inference Model (ESIM) BIBREF4 to solve FEVER Five Oracle results in an accuracy of .561. Because ESIM uses a single out-of-vocabulary (OOV) token for all unknown words, we expect it to confuse named entities. Thus we extend the model by allocating 10,000 indices for out-of-vocabulary words with randomly initialized embeddings, and taking a hash of each OOV word to select one of these indices. With extended ESIM, the accuracy is .586. Therefore, we run most later comparisons with extended ESIM or transformer networks as the entailment module, rather than Decomposable Attention. The FEVER One dataset is highly unbalanced in favor of neutral statements, so that the majority class baseline would achieve 93.0% on this data. 
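As an aside before the class-imbalance discussion continues, the claim-level aggregation and the title insertion described above can be written out directly. Resolving support/refute conflicts in favour of support follows the text; the exact bracket formatting and the label strings are our assumptions.

```python
def aggregate_claim_label(sentence_labels):
    """Aggregate per-sentence decisions into one claim label, resolving
    support/refute conflicts in favour of support (cf. the Ann Richards case)."""
    if "SUPPORTS" in sentence_labels:
        return "SUPPORTS"
    if "REFUTES" in sentence_labels:
        return "REFUTES"
    return "NOT ENOUGH INFO"

def add_title(page_title: str, sentence: str) -> str:
    """'FEVER Title One' style premise: underscores in the page title become
    spaces and the title is inserted in brackets before the sentence."""
    return "[ " + page_title.replace("_", " ") + " ] " + sentence
```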
In fact it makes training ESIM a challenge, as the model only learns the trivial majority class predictor if the natural training distribution is followed. We reweight the examples in FEVER One for ESIM so that each class contributes to the loss equally. Then, we use Cohen's Kappa rather than the accuracy to evaluate a model's quality, so that following the bias with purely random agreement is not rewarded in the evaluation. In Table 1 we compare FEVER One to FEVER Title One, both at the level of classifying individual support statements and of classifying the claim by aggregating these decisions as described above. On a support basis, we find a 52% increase in Kappa by adding the titles. When ESIM is replaced by the transformer network, class reweighting is not necessary. The network naturally learns to perform in excess of the majority class baseline. Cohen's Kappa is 68% higher than that for ESIM. The possibility of training on oracle labels for a concatenated set of evidence allows a classifier to simply guess whether the hypothesis is true and supported somewhere, rather than having to consider the relationship between hypothesis and premise. For example, it is possible to classify 67% of SNLI examples correctly without reading the premise BIBREF9 . As we show in Table 2 , for ESIM, we find that this kind of guessing makes the FEVER Title Five Oracle performance better than FEVER Title Five. The Transformer model is accurate enough that oracle guessing does not help. Both models perform best when classifying each bit of evidence separately and then aggregating. Improving retrieval Regardless of how strong the entailment classifier is, FEVER score is limited by whether the document and sentence retrieval modules, which produce the input to the entailment classifier, find the right evidence. In Table 3 , we examine the percentage of claims for which correct evidence is retrieved, before filtering with the entailment classifier. For this calculation, we skip any claim with an evidence group with multiple statements, and count a claim as succesfully retrieved if it is not verifiable or if the statement in one of the evidence groups is retrieved. The baseline system retrieves the five articles with the highest TFIDF score, and then extracts the five sentences from that collection with the highest TFIDF score against the claim. It achieves 66.1% evidence retrieval. Our first modification simply adds the title to each premise statement when computing its TFIDF against the claim, so that statements from a relevant article get credit even if the subject is not repeated. This raises evidence retrieval to 68.3%. A more significant boost comes from retrieving additional Wikipedia pages based on named entity recognition (NER). We start with phrases tagged as named entities by SpaCy BIBREF10 , but these tags are not very reliable, so we include various capitalized phrases. We retrieve Wikipedia pages whose title exactly matches one of these phrases. The named entity retrieval strategy boosts the evidence retrieval rate to 80.8%, while less than doubling the processing time. However, sometimes the named entity page thus retrieved is only a Wikipedia disambiguation page with no useful information. Noticing a lot of questions about films in the development set, we modify the strategy to also retrieve a page titled “X (film)” if it exists, whenever “X” is retrieved. The film retrievals raise evidence retrieval to 81.2%. 
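For reference, the baseline document and sentence retrieval described above (the five articles with the highest TFIDF score against the claim, then the five highest-scoring sentences from those articles) can be sketched with scikit-learn. The data layout, a dict from page title to its list of sentences, and the function names are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def baseline_retrieve(claim, articles, n_docs=5, n_sents=5):
    """articles: hypothetical dict mapping page title -> list of sentences.
    Returns the top (title, sentence) pairs by TFIDF similarity to the claim."""
    titles = list(articles)
    doc_texts = [" ".join(articles[t]) for t in titles]
    doc_vec = TfidfVectorizer().fit(doc_texts + [claim])
    doc_sims = cosine_similarity(doc_vec.transform([claim]),
                                 doc_vec.transform(doc_texts))[0]
    top_titles = [titles[i] for i in doc_sims.argsort()[::-1][:n_docs]]

    sentences = [(t, s) for t in top_titles for s in articles[t]]
    sent_vec = TfidfVectorizer().fit([s for _, s in sentences] + [claim])
    sent_sims = cosine_similarity(sent_vec.transform([claim]),
                                  sent_vec.transform([s for _, s in sentences]))[0]
    return [sentences[i] for i in sent_sims.argsort()[::-1][:n_sents]]
```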
Finally, we eliminate the TFIDF sentence ranking to expand sentence retrieval from five sentences to entire articles, up to the first fifty sentences from each. Thus we obtain 2.6 million statements to classify regarding the 19,998 claims in the shared task development set, for an average of 128 premises per claim. The evidence retrieval rate, including all these premises, increases to 90.1%. We continue to apply the entailment module trained with only five premise retrievals. Running the entailment module on this batch using a machine with three NVIDIA GeForce GTX 1080Ti GPU cards takes on the order of six hours. Retrieving more than five sentences means that we can no longer submit all retrieved evidence as support for the claims. Instead, we follow the aggregation strategy from Section "Reframing entailment" to decide the claim label, and only submit statements whose classification matches. Limiting evidence in this way when only five statements are retrieved (“narrow evidence” in Table 4 ) pushes FEVER score down very little, to .5550 from .5617 on the development set, so we have confidence that the extra retrieval will make up for the loss. Indeed, when the system reviews the extra evidence, FEVER score goes up to .5844 on the development set. Table 4 compares the end-to-end performance of systems that evaluate five retrieved statements together, evaluate five retrieved statements separately, and evaluate all statements from entire articles separately. Evaluating the statements separately gives better performance. We submit the systems that retrieve five statements and entire articles for evaluation on the test set, achieving preliminary FEVER scores of .5539 and .5736 respectively (label accuracy of .5754 and .6108, evidence recall of .6245 and .5002, evidence F1 of .2542 and .6485). In preliminary standings, the latter system ranks fourth in FEVER score and first in evidence F1. Discussion Our approach to FEVER involves a minimum of heuristics and relies mainly on the strength of the Transformer Network based entailment classification. The main performance gains come from adding retrievals that resolve named entities rather than matching the claim text only, filtering fewer of the retrievals, and making the entailment classifier somewhat aware of the topic of what it is reading by including the title. If higher quality and more plentiful multi-evidence claims would be constructed, it would be nice to incorporate dynamic retrievals into the system, allowing the classifier to decide that it needs more information about keywords it encountered during reading.
For the entailment classifier we compare Decomposable Attention BIBREF2, BIBREF3 as implemented in the official baseline, ESIM BIBREF4, and a transformer network with pre-trained weights BIBREF5.
4ecb6674bcb4162bf71aea8d8b82759255875df3
4ecb6674bcb4162bf71aea8d8b82759255875df3_0
Q: Which pre-trained transformer do they use? Text: Introduction The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted. As entailment data, premises in FEVER data differ substantially from those in the image caption data used as the basis for the Stanford Natural Language Inference (SNLI) BIBREF1 dataset. Sentences are longer (31 compared to 14 words on average), vocabulary is more abstract, and the prevalence of named entities and out-of-vocabulary terms is higher. The retrieval aspect of FEVER is not straightforward either. A claim may have small word overlap with the relevant evidence, especially if the claim is refuted by the evidence. Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods. Transformer network The core of our system is an entailment module based on a transformer network. Transformer networks BIBREF6 are deep networks applied to sequential input data, with each layer implementing multiple heads of scaled dot product attention. This attention mechanism allows deep features to be compared across positions in the input. Many entailment networks have two sequence inputs, but the transformer is designed with just one. A separator token divides the premise from the hypothesis. We use a specific transformer network released by OpenAI BIBREF5 that has been pre-trained for language modeling. The network consists of twelve blocks. Each block consists of a multi-head masked self-attention layer, layer normalization BIBREF7 , a feed forward network, and another layer normalization. After the twelfth block, two branches exist. In one branch, matrix multiplication and softmax layers are applied at the terminal sequence position to predict the entailment classification. In the other branch, a hidden state is multiplied by each token embedding and a softmax is taken to predict the next token. The language modeling branch has been pre-trained on the BookCorpus dataset BIBREF8 . We take the pre-trained model and train both branches on examples from FEVER. Reframing entailment The baseline FEVER system BIBREF0 ran the AllenNLP BIBREF3 implementation of Decomposable Attention BIBREF2 to classify a group of five premise statements concatenated together against the claim. These five premise statements were fixed by the retrieval module and not considered individually. In our system, premise statements are individually evaluated. 
We collect training data as the five sentences with the highest TFIDF score against the claim, taken from the Wikipedia pages selected by the retrieval module. If any ground truth evidence group for a claim requires more than one sentence, the claim is dropped from the training set. Otherwise, each sentence is labeled with the truth value of the claim if it is in the ground truth evidence set, and labeled as neutral if not. The resulting data forms an entailment problem that we call “FEVER One.” For comparison, we form “FEVER Five” and “FEVER Five Oracle” by concatenating all five retrieved sentences, as in the baseline. In FEVER Five Oracle, the ground truth is the claim ground truth (if verifiable), but in FEVER Five, ground truth depends on whether the retrieved evidence is in the ground truth evidence set. Several FEVER claims require multiple statements as evidence in order to be supported or refuted. The number of such claims is relatively small: in the first half of the development set, only 623 of 9999 claims were verifiable and had no singleton evidence groups. Furthermore, we disagreed with many of these annotations and thought that less evidence should have sufficed. Thus we chose not to develop a strategy for multiple evidence statements. To compare results on FEVER Five to FEVER One, we must aggregate decisions about individual sentences of possible evidence to a decision about the claim. We do this by applying the following rules: We resolve conflicts between supporting and refuting information in favor of the supporting information, because we observed cases in the development data where information was retrieved for different entities with the same name. For example, Ann Richards appeared both as a governor of Texas and as an Australian actress. Information that would be a contradiction regarding the actress should not stop evidence that would support a claim about the politician. Even if a sentence is in the evidence set, it might not be possible for the classifier to correctly determine whether it supports the claim, because the sentence could have pronouns with antecedents outside the given sentence. Ideally, a coreference resolution system could add this information to the sentence, but running one could be time consuming and introduce its own errors. As a cheap alternative, we make the classifier aware of the title of the Wikipedia page. We convert any undersores in the page title to spaces, and insert the title between brackets before the rest of each premise sentence. The dataset constructed in this way is called “FEVER Title One.” The FEVER baseline system works by solving FEVER Five Oracle. Using Decomposable Attention, it achieves .505 accuracy on the test half of the development set. Swapping in the Enhanced Sequential Inference Model (ESIM) BIBREF4 to solve FEVER Five Oracle results in an accuracy of .561. Because ESIM uses a single out-of-vocabulary (OOV) token for all unknown words, we expect it to confuse named entities. Thus we extend the model by allocating 10,000 indices for out-of-vocabulary words with randomly initialized embeddings, and taking a hash of each OOV word to select one of these indices. With extended ESIM, the accuracy is .586. Therefore, we run most later comparisons with extended ESIM or transformer networks as the entailment module, rather than Decomposable Attention. The FEVER One dataset is highly unbalanced in favor of neutral statements, so that the majority class baseline would achieve 93.0% on this data. 
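Before returning to the class-imbalance point, the construction of the sentence-level "FEVER One" training data described above reduces to a simple labeling rule. The sketch below assumes claims whose gold evidence groups all need more than one sentence have already been dropped, and the label strings are illustrative.

```python
def label_sentences(claim_label, retrieved_sentences, gold_evidence):
    """Build 'FEVER One' style training pairs: each retrieved sentence gets the
    claim's truth value if it is in the gold evidence set, and 'NOT ENOUGH INFO'
    (neutral) otherwise."""
    examples = []
    for sent in retrieved_sentences:
        label = claim_label if sent in gold_evidence else "NOT ENOUGH INFO"
        examples.append((sent, label))
    return examples
```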
In fact it makes training ESIM a challenge, as the model only learns the trivial majority class predictor if the natural training distribution is followed. We reweight the examples in FEVER One for ESIM so that each class contributes to the loss equally. Then, we use Cohen's Kappa rather than the accuracy to evaluate a model's quality, so that following the bias with purely random agreement is not rewarded in the evaluation. In Table 1 we compare FEVER One to FEVER Title One, both at the level of classifying individual support statements and of classifying the claim by aggregating these decisions as described above. On a support basis, we find a 52% increase in Kappa by adding the titles. When ESIM is replaced by the transformer network, class reweighting is not necessary. The network naturally learns to perform in excess of the majority class baseline. Cohen's Kappa is 68% higher than that for ESIM. The possibility of training on oracle labels for a concatenated set of evidence allows a classifier to simply guess whether the hypothesis is true and supported somewhere, rather than having to consider the relationship between hypothesis and premise. For example, it is possible to classify 67% of SNLI examples correctly without reading the premise BIBREF9 . As we show in Table 2 , for ESIM, we find that this kind of guessing makes the FEVER Title Five Oracle performance better than FEVER Title Five. The Transformer model is accurate enough that oracle guessing does not help. Both models perform best when classifying each bit of evidence separately and then aggregating. Improving retrieval Regardless of how strong the entailment classifier is, FEVER score is limited by whether the document and sentence retrieval modules, which produce the input to the entailment classifier, find the right evidence. In Table 3 , we examine the percentage of claims for which correct evidence is retrieved, before filtering with the entailment classifier. For this calculation, we skip any claim with an evidence group with multiple statements, and count a claim as succesfully retrieved if it is not verifiable or if the statement in one of the evidence groups is retrieved. The baseline system retrieves the five articles with the highest TFIDF score, and then extracts the five sentences from that collection with the highest TFIDF score against the claim. It achieves 66.1% evidence retrieval. Our first modification simply adds the title to each premise statement when computing its TFIDF against the claim, so that statements from a relevant article get credit even if the subject is not repeated. This raises evidence retrieval to 68.3%. A more significant boost comes from retrieving additional Wikipedia pages based on named entity recognition (NER). We start with phrases tagged as named entities by SpaCy BIBREF10 , but these tags are not very reliable, so we include various capitalized phrases. We retrieve Wikipedia pages whose title exactly matches one of these phrases. The named entity retrieval strategy boosts the evidence retrieval rate to 80.8%, while less than doubling the processing time. However, sometimes the named entity page thus retrieved is only a Wikipedia disambiguation page with no useful information. Noticing a lot of questions about films in the development set, we modify the strategy to also retrieve a page titled “X (film)” if it exists, whenever “X” is retrieved. The film retrievals raise evidence retrieval to 81.2%. 
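Returning to the evaluation protocol described at the start of this passage, the class reweighting and the Kappa-based scoring can be reproduced with standard tooling; the toy label counts and the 0/1/2 encoding below are assumptions.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from sklearn.utils.class_weight import compute_class_weight

# Toy stand-ins for the heavily imbalanced FEVER One labels
# (assumed encoding: 0 = SUPPORTS, 1 = REFUTES, 2 = NEUTRAL).
train_labels = np.array([2] * 93 + [0] * 4 + [1] * 3)

# Per-class weights so each class contributes equally to the ESIM loss.
weights = compute_class_weight(class_weight="balanced",
                               classes=np.array([0, 1, 2]),
                               y=train_labels)
print(dict(zip([0, 1, 2], weights)))

# Cohen's Kappa instead of accuracy, so that following the label bias with
# purely random agreement is not rewarded.
y_true = np.array([2, 2, 2, 0, 1, 2])
y_pred = np.array([2, 2, 2, 0, 2, 2])
print(cohen_kappa_score(y_true, y_pred))
```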
Finally, we eliminate the TFIDF sentence ranking to expand sentence retrieval from five sentences to entire articles, up to the first fifty sentences from each. Thus we obtain 2.6 million statements to classify regarding the 19,998 claims in the shared task development set, for an average of 128 premises per claim. The evidence retrieval rate, including all these premises, increases to 90.1%. We continue to apply the entailment module trained with only five premise retrievals. Running the entailment module on this batch using a machine with three NVIDIA GeForce GTX 1080Ti GPU cards takes on the order of six hours. Retrieving more than five sentences means that we can no longer submit all retrieved evidence as support for the claims. Instead, we follow the aggregation strategy from Section "Reframing entailment" to decide the claim label, and only submit statements whose classification matches. Limiting evidence in this way when only five statements are retrieved (“narrow evidence” in Table 4 ) pushes FEVER score down very little, to .5550 from .5617 on the development set, so we have confidence that the extra retrieval will make up for the loss. Indeed, when the system reviews the extra evidence, FEVER score goes up to .5844 on the development set. Table 4 compares the end-to-end performance of systems that evaluate five retrieved statements together, evaluate five retrieved statements separately, and evaluate all statements from entire articles separately. Evaluating the statements separately gives better performance. We submit the systems that retrieve five statements and entire articles for evaluation on the test set, achieving preliminary FEVER scores of .5539 and .5736 respectively (label accuracy of .5754 and .6108, evidence recall of .6245 and .5002, evidence F1 of .2542 and .6485). In preliminary standings, the latter system ranks fourth in FEVER score and first in evidence F1. Discussion Our approach to FEVER involves a minimum of heuristics and relies mainly on the strength of the Transformer Network based entailment classification. The main performance gains come from adding retrievals that resolve named entities rather than matching the claim text only, filtering fewer of the retrievals, and making the entailment classifier somewhat aware of the topic of what it is reading by including the title. If higher quality and more plentiful multi-evidence claims would be constructed, it would be nice to incorporate dynamic retrievals into the system, allowing the classifier to decide that it needs more information about keywords it encountered during reading.
BIBREF5
eacc1eb65daad055df934e0e878f417b73b2ecc1
eacc1eb65daad055df934e0e878f417b73b2ecc1_0
Q: What is the FEVER task? Text: Introduction The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted. As entailment data, premises in FEVER data differ substantially from those in the image caption data used as the basis for the Stanford Natural Language Inference (SNLI) BIBREF1 dataset. Sentences are longer (31 compared to 14 words on average), vocabulary is more abstract, and the prevalence of named entities and out-of-vocabulary terms is higher. The retrieval aspect of FEVER is not straightforward either. A claim may have small word overlap with the relevant evidence, especially if the claim is refuted by the evidence. Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods. Transformer network The core of our system is an entailment module based on a transformer network. Transformer networks BIBREF6 are deep networks applied to sequential input data, with each layer implementing multiple heads of scaled dot product attention. This attention mechanism allows deep features to be compared across positions in the input. Many entailment networks have two sequence inputs, but the transformer is designed with just one. A separator token divides the premise from the hypothesis. We use a specific transformer network released by OpenAI BIBREF5 that has been pre-trained for language modeling. The network consists of twelve blocks. Each block consists of a multi-head masked self-attention layer, layer normalization BIBREF7 , a feed forward network, and another layer normalization. After the twelfth block, two branches exist. In one branch, matrix multiplication and softmax layers are applied at the terminal sequence position to predict the entailment classification. In the other branch, a hidden state is multiplied by each token embedding and a softmax is taken to predict the next token. The language modeling branch has been pre-trained on the BookCorpus dataset BIBREF8 . We take the pre-trained model and train both branches on examples from FEVER. Reframing entailment The baseline FEVER system BIBREF0 ran the AllenNLP BIBREF3 implementation of Decomposable Attention BIBREF2 to classify a group of five premise statements concatenated together against the claim. These five premise statements were fixed by the retrieval module and not considered individually. In our system, premise statements are individually evaluated. We collect training data as the five sentences with the highest TFIDF score against the claim, taken from the Wikipedia pages selected by the retrieval module. 
If any ground truth evidence group for a claim requires more than one sentence, the claim is dropped from the training set. Otherwise, each sentence is labeled with the truth value of the claim if it is in the ground truth evidence set, and labeled as neutral if not. The resulting data forms an entailment problem that we call “FEVER One.” For comparison, we form “FEVER Five” and “FEVER Five Oracle” by concatenating all five retrieved sentences, as in the baseline. In FEVER Five Oracle, the ground truth is the claim ground truth (if verifiable), but in FEVER Five, ground truth depends on whether the retrieved evidence is in the ground truth evidence set. Several FEVER claims require multiple statements as evidence in order to be supported or refuted. The number of such claims is relatively small: in the first half of the development set, only 623 of 9999 claims were verifiable and had no singleton evidence groups. Furthermore, we disagreed with many of these annotations and thought that less evidence should have sufficed. Thus we chose not to develop a strategy for multiple evidence statements. To compare results on FEVER Five to FEVER One, we must aggregate decisions about individual sentences of possible evidence to a decision about the claim. We do this by applying the following rules: We resolve conflicts between supporting and refuting information in favor of the supporting information, because we observed cases in the development data where information was retrieved for different entities with the same name. For example, Ann Richards appeared both as a governor of Texas and as an Australian actress. Information that would be a contradiction regarding the actress should not stop evidence that would support a claim about the politician. Even if a sentence is in the evidence set, it might not be possible for the classifier to correctly determine whether it supports the claim, because the sentence could have pronouns with antecedents outside the given sentence. Ideally, a coreference resolution system could add this information to the sentence, but running one could be time consuming and introduce its own errors. As a cheap alternative, we make the classifier aware of the title of the Wikipedia page. We convert any undersores in the page title to spaces, and insert the title between brackets before the rest of each premise sentence. The dataset constructed in this way is called “FEVER Title One.” The FEVER baseline system works by solving FEVER Five Oracle. Using Decomposable Attention, it achieves .505 accuracy on the test half of the development set. Swapping in the Enhanced Sequential Inference Model (ESIM) BIBREF4 to solve FEVER Five Oracle results in an accuracy of .561. Because ESIM uses a single out-of-vocabulary (OOV) token for all unknown words, we expect it to confuse named entities. Thus we extend the model by allocating 10,000 indices for out-of-vocabulary words with randomly initialized embeddings, and taking a hash of each OOV word to select one of these indices. With extended ESIM, the accuracy is .586. Therefore, we run most later comparisons with extended ESIM or transformer networks as the entailment module, rather than Decomposable Attention. The FEVER One dataset is highly unbalanced in favor of neutral statements, so that the majority class baseline would achieve 93.0% on this data. In fact it makes training ESIM a challenge, as the model only learns the trivial majority class predictor if the natural training distribution is followed. 
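The page-title trick and the per-sentence aggregation described above lend themselves to a short sketch. The aggregation below encodes the stated rule that supporting evidence overrides refuting evidence, with a neutral fallback when nothing supports or refutes; since the full rule list is abbreviated in the text, that fallback is one plausible reading, and the enum and function names are illustrative rather than taken from the system's code.

#include <string>
#include <vector>

enum class Verdict { Supports, Refutes, Neutral };

// "FEVER Title One": convert underscores in the page title to spaces and insert
// the title between brackets before the premise sentence.
std::string withTitle(std::string title, const std::string& premise) {
    for (char& c : title)
        if (c == '_') c = ' ';
    return "[ " + title + " ] " + premise;
}

// Aggregate sentence-level decisions into a claim label. Support wins over
// refutation (e.g. the two "Ann Richards" pages); otherwise any refutation
// decides, and a claim with only neutral sentences stays NOT ENOUGH INFO.
Verdict aggregateClaim(const std::vector<Verdict>& perSentence) {
    bool anyRefutes = false;
    for (Verdict v : perSentence) {
        if (v == Verdict::Supports) return Verdict::Supports;
        if (v == Verdict::Refutes) anyRefutes = true;
    }
    return anyRefutes ? Verdict::Refutes : Verdict::Neutral;
}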
We reweight the examples in FEVER One for ESIM so that each class contributes to the loss equally. Then, we use Cohen's Kappa rather than the accuracy to evaluate a model's quality, so that following the bias with purely random agreement is not rewarded in the evaluation. In Table 1 we compare FEVER One to FEVER Title One, both at the level of classifying individual support statements and of classifying the claim by aggregating these decisions as described above. On a support basis, we find a 52% increase in Kappa by adding the titles. When ESIM is replaced by the transformer network, class reweighting is not necessary. The network naturally learns to perform in excess of the majority class baseline. Cohen's Kappa is 68% higher than that for ESIM. The possibility of training on oracle labels for a concatenated set of evidence allows a classifier to simply guess whether the hypothesis is true and supported somewhere, rather than having to consider the relationship between hypothesis and premise. For example, it is possible to classify 67% of SNLI examples correctly without reading the premise BIBREF9 . As we show in Table 2 , for ESIM, we find that this kind of guessing makes the FEVER Title Five Oracle performance better than FEVER Title Five. The Transformer model is accurate enough that oracle guessing does not help. Both models perform best when classifying each bit of evidence separately and then aggregating. Improving retrieval Regardless of how strong the entailment classifier is, FEVER score is limited by whether the document and sentence retrieval modules, which produce the input to the entailment classifier, find the right evidence. In Table 3 , we examine the percentage of claims for which correct evidence is retrieved, before filtering with the entailment classifier. For this calculation, we skip any claim with an evidence group with multiple statements, and count a claim as succesfully retrieved if it is not verifiable or if the statement in one of the evidence groups is retrieved. The baseline system retrieves the five articles with the highest TFIDF score, and then extracts the five sentences from that collection with the highest TFIDF score against the claim. It achieves 66.1% evidence retrieval. Our first modification simply adds the title to each premise statement when computing its TFIDF against the claim, so that statements from a relevant article get credit even if the subject is not repeated. This raises evidence retrieval to 68.3%. A more significant boost comes from retrieving additional Wikipedia pages based on named entity recognition (NER). We start with phrases tagged as named entities by SpaCy BIBREF10 , but these tags are not very reliable, so we include various capitalized phrases. We retrieve Wikipedia pages whose title exactly matches one of these phrases. The named entity retrieval strategy boosts the evidence retrieval rate to 80.8%, while less than doubling the processing time. However, sometimes the named entity page thus retrieved is only a Wikipedia disambiguation page with no useful information. Noticing a lot of questions about films in the development set, we modify the strategy to also retrieve a page titled “X (film)” if it exists, whenever “X” is retrieved. The film retrievals raise evidence retrieval to 81.2%. Finally, we eliminate the TFIDF sentence ranking to expand sentence retrieval from five sentences to entire articles, up to the first fifty sentences from each. 
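The named-entity page retrieval can be pictured as generating candidate page titles from the claim and looking each one up by exact title match. The sketch below assumes the entity or capitalized phrases have already been extracted (e.g. with SpaCy) and only shows the expansion with the "(film)" variant; it is an illustration, not the system's code, and it adds the film variant for every phrase rather than only for retrieved pages.

#include <set>
#include <string>
#include <vector>

// Expand extracted phrases into candidate Wikipedia page titles: the phrase
// itself plus its "(film)" variant, to get past disambiguation pages for films.
std::set<std::string> candidateTitles(const std::vector<std::string>& phrases) {
    std::set<std::string> titles;
    for (const std::string& p : phrases) {
        titles.insert(p);
        titles.insert(p + " (film)");
    }
    return titles;
}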
Thus we obtain 2.6 million statements to classify regarding the 19,998 claims in the shared task development set, for an average of 128 premises per claim. The evidence retrieval rate, including all these premises, increases to 90.1%. We continue to apply the entailment module trained with only five premise retrievals. Running the entailment module on this batch using a machine with three NVIDIA GeForce GTX 1080Ti GPU cards takes on the order of six hours. Retrieving more than five sentences means that we can no longer submit all retrieved evidence as support for the claims. Instead, we follow the aggregation strategy from Section "Reframing entailment" to decide the claim label, and only submit statements whose classification matches. Limiting evidence in this way when only five statements are retrieved (“narrow evidence” in Table 4 ) pushes FEVER score down very little, to .5550 from .5617 on the development set, so we have confidence that the extra retrieval will make up for the loss. Indeed, when the system reviews the extra evidence, FEVER score goes up to .5844 on the development set. Table 4 compares the end-to-end performance of systems that evaluate five retrieved statements together, evaluate five retrieved statements separately, and evaluate all statements from entire articles separately. Evaluating the statements separately gives better performance. We submit the systems that retrieve five statements and entire articles for evaluation on the test set, achieving preliminary FEVER scores of .5539 and .5736 respectively (label accuracy of .5754 and .6108, evidence recall of .6245 and .5002, evidence F1 of .2542 and .6485). In preliminary standings, the latter system ranks fourth in FEVER score and first in evidence F1. Discussion Our approach to FEVER involves a minimum of heuristics and relies mainly on the strength of the Transformer Network based entailment classification. The main performance gains come from adding retrievals that resolve named entities rather than matching the claim text only, filtering fewer of the retrievals, and making the entailment classifier somewhat aware of the topic of what it is reading by including the title. If higher quality and more plentiful multi-evidence claims would be constructed, it would be nice to incorporate dynamic retrievals into the system, allowing the classifier to decide that it needs more information about keywords it encountered during reading.
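The evidence-submission rule above (keep only those retrieved sentences whose individual classification matches the aggregated claim label) is simple enough to sketch; the enum is the same illustrative one used earlier, not the actual pipeline's type.

#include <cstddef>
#include <vector>

enum class Verdict { Supports, Refutes, Neutral };

// Given per-sentence verdicts and the aggregated claim label, return the indices
// of the sentences to submit as evidence: only those agreeing with the label.
std::vector<std::size_t> evidenceToSubmit(const std::vector<Verdict>& perSentence,
                                          Verdict claimLabel) {
    std::vector<std::size_t> keep;
    if (claimLabel == Verdict::Neutral) return keep;  // nothing to submit for NOT ENOUGH INFO
    for (std::size_t i = 0; i < perSentence.size(); ++i)
        if (perSentence[i] == claimLabel) keep.push_back(i);
    return keep;
}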
tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem
d353a6bbdc66be9298494d0c853e0d8d752dec4b
d353a6bbdc66be9298494d0c853e0d8d752dec4b_0
Q: How is correctness of automatic derivation proved? Text: Introduction Accurate and efficient computation of derivatives is vital for a wide variety of computing applications, including numerical optimization, solution of nonlinear equations, sensitivity analysis, and nonlinear inverse problems. Virtually every process could be described with a mathematical function, which can be thought of as an association between elements from different sets. Derivatives track how a varying quantity depends on another quantity, for example how the position of a planet varies as time varies. Derivatives and gradients (vectors of partial derivatives of multivariable functions) allow us to explore the properties of a function and thus the described process as a whole. Gradients are an essential component in gradient-based optimization methods, which have become more and more important in recent years, in particular with its application training of (deep) neural networks BIBREF0. Several different techniques are commonly used to compute the derivatives of a given function, either exactly or approximately BIBREF1, BIBREF0, BIBREF2. The most prevalent techniques are: Numerical differentiation, based on the finite difference method, provides a way to evaluate derivatives approximately. While simple, numerical differentiation can be slow (the run-time complexity grows linearly with the number of input variables) and may have problems with accuracy due to round-off and truncation errors. Symbolic differentiation, based on transformations of symbolic expressions of functions, provides exact closed-form expressions for the derivatives. It faces difficulties when the function to be differentiated is not available in a closed form, which is often the case for computer programs which may contain control flow. Symbolic differentiation can produce derivative expressions that are computationally expensive to evaluate due to difficulties in exploiting common subexpressions. Automatic differentiation (AD) computes derivatives accurately to the precision of the original function, supports control flow and uses at most a small constant factor more time and space than it takes to evaluate the original function, at the expense of increased implementation complexity and introducing more software dependencies. Numerical and symbolic differentiation methods are slow at computing gradients of functions with many input variables, as is often needed for gradient-based optimization algorithms. Both methods have problems calculating higher-order derivatives, where the complexity and errors due to numerical precision increase. Automatic differentiation largely avoids the problems of numerical and symbolic differentiation. In this paper, we describe the implementation of automatic differentiation techniques in ROOT, which is the data analysis framework broadly used High-Energy Physics BIBREF3. This implementation is based on Clad BIBREF4, BIBREF5, which is an automatic differentiation plugin for computation expressed in C/C++. Background Here, we briefly discuss main algorithmic and implementation principles behind AD. An in-depth overview and more formal description can be found in BIBREF1 and BIBREF2, respectively. Background ::: AD and its Modes AD is based on the decomposition of the procedure (e.g. a source code that computes the original function) into a sequence of simple mathematical operations (e.g. $+, -, *, /, \sin , \cos , \exp $) that can be expressed using a series of intermediate results. 
Subsequently, derivatives of every intermediate result are evaluated and combined via the chain rule of calculus to obtain the derivatives of the whole sequence. The control flow (e.g. branches, loops) can be incorporated by differentiating the control flow of the original function during the derivative evaluation. Two main modes of AD, which differ in the order of application of the chain rule, are used: Forward mode operates in a top-down approach and computes the derivative of every intermediate result with respect to a single selected input variable of the function. As soon as a final result of the function is reached, the partial derivative with respect to the selected input is available. A single evaluation of the forward mode can only compute partial derivatives with respect to a single input variable. Thus, when the whole gradient is required, forward mode must be invoked once per every input variable, leading to $m \cdot c_{F} \cdot n$ runtime complexity, where $m$ is the number of input variables, $n$ is the algorithmic complexity of the original function and $c_{F} < 3 $ is a small constant factor overhead of a single invocation of the forward mode BIBREF2. Reverse mode operates in a bottom-up approach and computes the derivative of a function's output with respect to every intermediate result. Once every input variable of the function is reached, the whole gradient of an output is available. Note that, independently on the number of input variables $N$, a single evaluation of the reverse mode is sufficient to get the whole gradient of a function's output, leading to $c_{R} \cdot n$ runtime complexity, where $n$ is the complexity of the original function and $c_{R} \le 4$ is a small constant factor overhead BIBREF2. This is a huge advantage in settings with a single scalar output and many inputs, which is often the case in machine-learning problems where $N >> 10^6$ that makes the forward mode infeasible. As a disadvantage, reverse mode implementations are more complicated, and dynamic memory allocations may be required when dynamic control flow is involved. Depending on the original function, this may cause a single evaluation of the reverse mode to be somewhat slower compared to a single evaluation of the forward mode. Background ::: AD Implementations AD techniques have been implemented in a variety of programming languages and paradigms, ranging from classical tools for Fortran BIBREF6 and C BIBREF7, to recent active work on tools specific to machine-learning applications BIBREF8, BIBREF9, and modern general-purpose programming languages BIBREF10, BIBREF11. We refer the reader to www.autodiff.org for a comprehensive list of available AD implementations for various languages. In particular, several implementations exist for C++, e.g. BIBREF12, BIBREF13, BIBREF14. Majority of implementations of AD fall into one of the two categories of implementation techniques: Tools based on operator overloading utilize features of programming languages like C++ and Python to define custom types and overload mathematical operators (e.g. +, -, *, /) and functions (e.g. $\exp , \sin , \cos $) on them. Such implementations are often based on custom AD-enabled types that wrap values of both the original and derivative functions and redefine operators to simultaneously act on original and derivative values. In C++, such tools are often implemented as a library that introduces templated differentiable types and corresponding mathematical operations. 
Then, functions called on the custom type return both original and derivative values. This is a powerful technique but has two primary limitations: legacy code and performance. Functions must be either polymorphic (templated) or explicitly defined on AD-enabled type to be differentiated. Differentiation of pre-existing source code using builtin types such as double and float is not possible. Users are required to use additional level of abstraction in the form of library-specific types instead of first-class language features. Moreover, the performance of the derivative generation can be suboptimal due to the C++ metaprogramming system which usually constructs deep template instantiation chains. Performance can be even more problematic when creating a higher order derivatives. Tools based on source transformation analyze the source code of the original function and build another source code for the derivative function. Such techniques typically accept and generate any code using built-in features of the original language and do not require custom libraries. On the other hand, they require an additional pass over the source file to analyze and generate derivative code. Source transformation can fully utilize source-level optimizations and has reasonably good performance. Implementation is more complicated and it is problematic to achieve full coverage of C++ language features. While full integration with a compiler can make AD a first-class language feature that is transparent for the user, most current implementations for C++ are based on custom parsers that do not have full coverage of the vast variety of C++ language constructs and require a separate step before compilation. Architecture and Implementation Automatic differentiation in ROOT is based on Clad BIBREF4, BIBREF5. Clad is a source transformation AD tool for C++. It is based on LLVM compiler infrastructure BIBREF15 and is implemented as a plugin for C++ compiler Clang, which allows Clad to be transparently integrated into the compilation phase and to utilize large parts of the compiler. Clad relies on Clang's parsing and code generation functionality and can differentiate complicated C++ constructs. Clad supports both forward and reverse mode. It is available as a standalone Clang plugin that, when attached to the compiler, produces derivatives in the compilation phase. On top of that, Clad is integrated directly into ROOT to provide AD functionality as an integral part of the framework. ROOT has a C++ interpreter Cling BIBREF16 which is built on the top of LLVM and Clang. This allows Clad to be attached to Cling as a plugin in a similar way as it can be attached to Clang. In this section, we discuss 1) architecture of Clad and its interaction with Cling; and 2) details of its integration into ROOT. Clad operates on Clang AST (abstract syntax tree) by analyzing the AST of the original function and generating the AST of the derivative. Clad provides two API functions: clad::differentiate for forward mode and clad::gradient for reverse mode, which can be used directly in the source code to mark a function for differentiation (see BIBREF5 for more details on usage and code examples). The information flow of interactions with Cling during differentiation (Figure FIGREF13) is: A function is marked for differentiation with the C++ construct clad::differentiate or clad::gradient (step 1). Cling in ROOT performs incremental compilation and receives an abstract syntax tree (AST) representation of the code (step 2). 
Cling detects the differentiation marker and sends the AST of the original function to Clad, which transforms the AST to produce the AST of the derivative (step 3). Clad returns the derivative AST to Cling for code generation and execution by the low-level LLVM primitives (steps 4, 5, 6, 7). Alternatively, if Clad was configured for non-interactive use, the generated AST can be converted to C++ source code and written to a text file. The generated code can then be compiled with any C++ compiler (steps 8, 9). Inside ROOT, the interface functions clad::differentiate and clad::gradient are accessible via the include <Math/CladDerivator.h>. Clad is also directly integrated into the TFormula class that encapsulates the concept of multidimensional mathematical functions in ROOT. TFormula is a primitive in ROOT's math package which is connected to the Cling interpreter. In the context of TFormula, Clad can differentiate functions available in the interpreter. The TFormula::GenerateGradientPar method uses Clad to differentiate the underlying code of the formula with respect to its parameters and generate the code for the gradient. The TFormula::GradientPar method then evaluates the gradient at a specified point. Results In this section, we empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method) in ROOT. We show that AD can drastically improve accuracy and performance of derivative evaluation, compared to ND. Results ::: Accuracy As stated in Section SECREF1, numerical differentiation may give imprecise results while AD computes the derivatives exactly. We show an example of a function where this difference is apparent: AD provides the exact result while ND suffers from a loss of accuracy. The function is the PDF of the Breit-Wigner distribution (Eq. DISPLAY_FORM19), whose derivative with respect to $\Gamma $ (Eq. DISPLAY_FORM20) has critical points at $\Gamma =\pm 2x$. In ROOT, the function is implemented as in Listing SECREF18.

inline double breitwignerpdf(double x, double gamma, double x0 = 0) {
    double gammahalf = gamma / 2.0;
    return gammahalf / (M_PI * ((x - x0) * (x - x0) + gammahalf * gammahalf));
}

Listing: Breit-Wigner PDF implementation in ROOT

When evaluating the derivative of breitwignerpdf with respect to gamma at x=1, gamma=2, ND in ROOT yields a result close to 0 with an absolute error of $10^{-13}$, despite the fact that the function is smooth and well-conditioned at this point. The approximation error becomes larger when the derivative is evaluated further from the critical point. In contrast, automatic differentiation (in both modes) yields the exact result of 0. Results ::: Performance Section SECREF2 showed that reverse mode AD computes gradients in a single pass with a runtime complexity of at most $4 \cdot n$, which depends only on the complexity $n$ and not on the dimensionality $dim$ of the original function. On the other hand, numerical differentiation requires a separate evaluation of the original function for every dimension to compute the entire gradient, making the overall run-time complexity of gradient evaluation via the central finite difference method $2 \cdot dim \cdot n$. Hence, in theory, reverse mode achieves an asymptotic speedup of $O(dim)$ over numerical differentiation and can be up to $dim / 2$ times faster.
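To make the accuracy example concrete: the derivative of the Breit-Wigner PDF with respect to $\Gamma $ has the closed form $\frac{(x-x_0)^2 - (\Gamma /2)^2}{2\pi ((x-x_0)^2 + (\Gamma /2)^2)^2}$, which vanishes at $\Gamma = \pm 2(x-x_0)$. The check below, written here purely for illustration and reusing breitwignerpdf from the listing above, compares that exact value with a central finite difference at x=1, gamma=2; it does not depend on Clad's API.

#include <cmath>
#include <cstdio>

// Hand-derived d/dgamma of the Breit-Wigner PDF above; zero at gamma = +/- 2(x - x0).
inline double dbreitwigner_dgamma(double x, double gamma, double x0 = 0) {
    double u = x - x0;
    double g = gamma / 2.0;
    double s = u * u + g * g;
    return (u * u - g * g) / (2.0 * M_PI * s * s);
}

int main() {
    const double x = 1.0, gamma = 2.0, eps = 1e-8;
    double nd = (breitwignerpdf(x, gamma + eps) - breitwignerpdf(x, gamma - eps)) / (2.0 * eps);
    std::printf("exact = %g, central difference = %g\n", dbreitwigner_dgamma(x, gamma), nd);
    return 0;
}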
We experimentally verify this predicted $O(dim)$ speedup by comparing the performance of gradient evaluation produced by reverse mode AD against our implementation of numerical differentiation via the central finite difference method. We use the two functions in Listing SECREF21: sum, which computes the sum of all values in a vector; and mvn, which implements the PDF of a multivariate normal distribution. Both functions have a parameter dim which defines the dimension, and gradients are taken with respect to the dim-dimensional vector p. While closed-form expressions of these gradients are well-known, these functions make a good basis for a benchmark as they perform typical operations that are commonly found inside more complicated functions (e.g. +, *, pow, exp inside a loop).

double sum(double* p, int dim) {
    double r = 0.0;
    for (int i = 0; i < dim; i++)
        r += p[i];
    return r;
}

double mvn(double* x, double* p /*means*/, double sigma, int dim) {
    double t = 0;
    for (int i = 0; i < dim; i++)
        t += (x[i] - p[i]) * (x[i] - p[i]);
    t = -t / (2 * sigma * sigma);
    return std::pow(2 * M_PI, -dim / 2.0) * std::pow(sigma, -0.5) * std::exp(t);
}

Listing: Implementations of the sum and mvn functions

Gradients of sum produced by numerical differentiation and Clad are shown in Listing SECREF21.

double* sumnumgrad(double* p, int dim, double eps = 1e-8) {
    double* result = new double[dim];
    for (int i = 0; i < dim; i++) {
        double pi = p[i];
        p[i] = pi + eps;
        double v1 = sum(p, dim);
        p[i] = pi - eps;
        double v2 = sum(p, dim);
        result[i] = (v1 - v2) / (2 * eps);
        p[i] = pi;
    }
    return result;
}

void sumadgrad(double* p, int dim, double* result) {
    double dr = 0;
    unsigned long t0;
    int di = 0;
    clad::tape<int> t1 = {};
    double r = 0.;
    t0 = 0;
    // forward sweep: evaluate sum and record the loop trip count and indices
    for (int i = 0; i < dim; i++) {
        t0++;
        r += p[clad::push(t1, i)];
    }
    double sumreturn = r;
    dr += 1;
    // reverse sweep: replay the recorded iterations and accumulate the adjoints
    for (; t0; t0--) {
        double rd0 = dr;
        dr += rd0;
        result[clad::pop(t1)] += rd0;
        dr -= rd0;
    }
}

Listing: Gradient of sum computed with finite differences (sumnumgrad) and as generated by Clad (sumadgrad)

We perform the evaluation for values of dim between 5 and 20480. Figure FIGREF22 shows the comparison for (a) sum and (b) mvn, and confirms the expected theoretical speedup of $O(dim)$, with the AD-generated gradient being $\sim dim/4$ times faster for sum and $\sim dim/25$ times faster for mvn (the smaller factor is due to more expensive operations like pow and exp). Results ::: Performance in TFormula Figure FIGREF26 shows the performance comparisons of reverse-mode AD and ND for the task of evaluating gradients of TFormula's builtin primitive probability density functions. The functions are gaus ($dim=3$), expo ($dim=2$), crystalball ($dim=5$), breitwigner ($dim=5$) and cheb2 ($dim=4$). Despite the low dimensionality ($dim \le 5$), AD gives significant (approx. 10x) speedups. The speedups are even larger than the expected factor of $dim/2$ that follows from the theoretical results, apparently due to the additional overhead of the implementation of numerical differentiation in ROOT, which tries to find the optimal step size for its finite difference method to improve accuracy. In Figure FIGREF26, we perform fitting of a Gaussian distribution to a histogram of random samples via gradient-based optimization. In ROOT, this functionality is implemented in the TFormula-based TF1 class. We can therefore use AD due to the integration of Clad into TFormula. Figure FIGREF26 compares the performance of the AD-based TF1 fitting with the numerical fitting in the Hist package. As in previous experiments, we show that AD scales better with problem dimensionality (number of histogram bins) on this task.
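A rough timing harness for this comparison can look like the following. It reuses sum and sumnumgrad from the listing above; since the Clad-generated sumadgrad needs the clad runtime, a hand-written single-pass gradient (d sum / d p[i] = 1 for every i) stands in for it here, so the timings only illustrate the O(dim) gap rather than reproducing the paper's numbers.

#include <chrono>
#include <cstdio>
#include <vector>

// Hand-written single-pass gradient of sum, standing in for the Clad-generated
// reverse-mode code: O(dim) work instead of the O(dim^2) work of sumnumgrad.
void sumgrad_analytic(int dim, double* result) {
    for (int i = 0; i < dim; i++) result[i] = 1.0;
}

template <typename F>
double millis(F f) {
    auto start = std::chrono::steady_clock::now();
    f();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main() {
    const int dim = 20480;
    std::vector<double> p(dim, 1.0), g(dim, 0.0);
    double tND = millis([&] { delete[] sumnumgrad(p.data(), dim); });
    double tAD = millis([&] { sumgrad_analytic(dim, g.data()); });
    std::printf("finite differences: %.3f ms, single pass: %.3f ms\n", tND, tAD);
    return 0;
}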
The integration of Clad into TFormula makes it straightforward to use AD for fitting in ROOT. Conclusion We discussed our implementation of automatic differentiation in ROOT based on Clad. We demonstrated that Clad is integrated into ROOT and can be easily used in various contexts inside ROOT (e.g. histogram fitting). Furthermore, we showed that automatic differentiation in ROOT achieves significant improvements in accuracy and performance over numerical differentiation. The performance and accuracy are promising and encourage further work in the development of Clad and its integration in ROOT. Possible further improvements for Clad include optimizations to code transformation and design of a consistent interface for derivatives and gradients computation. This functionality can be further extended, including the computation of Jacobians and higher-order derivatives. In order to achieve optimal performance, the evaluation of individual derivatives could be executed in parallel. Besides, the Clad API should enable a flexible execution method based on the needs of its user. Acknowledgments This work has been supported by U.S. NSF grants PHY-1450377 and 1450323.
empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method)
e2cfaa2ec89b944bbc46e5edf7753b3018dbdc8f
e2cfaa2ec89b944bbc46e5edf7753b3018dbdc8f_0
Q: Is this AD implementation used in any deep learning framework? Text: Introduction Accurate and efficient computation of derivatives is vital for a wide variety of computing applications, including numerical optimization, solution of nonlinear equations, sensitivity analysis, and nonlinear inverse problems. Virtually every process could be described with a mathematical function, which can be thought of as an association between elements from different sets. Derivatives track how a varying quantity depends on another quantity, for example how the position of a planet varies as time varies. Derivatives and gradients (vectors of partial derivatives of multivariable functions) allow us to explore the properties of a function and thus the described process as a whole. Gradients are an essential component in gradient-based optimization methods, which have become more and more important in recent years, in particular with its application training of (deep) neural networks BIBREF0. Several different techniques are commonly used to compute the derivatives of a given function, either exactly or approximately BIBREF1, BIBREF0, BIBREF2. The most prevalent techniques are: Numerical differentiation, based on the finite difference method, provides a way to evaluate derivatives approximately. While simple, numerical differentiation can be slow (the run-time complexity grows linearly with the number of input variables) and may have problems with accuracy due to round-off and truncation errors. Symbolic differentiation, based on transformations of symbolic expressions of functions, provides exact closed-form expressions for the derivatives. It faces difficulties when the function to be differentiated is not available in a closed form, which is often the case for computer programs which may contain control flow. Symbolic differentiation can produce derivative expressions that are computationally expensive to evaluate due to difficulties in exploiting common subexpressions. Automatic differentiation (AD) computes derivatives accurately to the precision of the original function, supports control flow and uses at most a small constant factor more time and space than it takes to evaluate the original function, at the expense of increased implementation complexity and introducing more software dependencies. Numerical and symbolic differentiation methods are slow at computing gradients of functions with many input variables, as is often needed for gradient-based optimization algorithms. Both methods have problems calculating higher-order derivatives, where the complexity and errors due to numerical precision increase. Automatic differentiation largely avoids the problems of numerical and symbolic differentiation. In this paper, we describe the implementation of automatic differentiation techniques in ROOT, which is the data analysis framework broadly used High-Energy Physics BIBREF3. This implementation is based on Clad BIBREF4, BIBREF5, which is an automatic differentiation plugin for computation expressed in C/C++. Background Here, we briefly discuss main algorithmic and implementation principles behind AD. An in-depth overview and more formal description can be found in BIBREF1 and BIBREF2, respectively. Background ::: AD and its Modes AD is based on the decomposition of the procedure (e.g. a source code that computes the original function) into a sequence of simple mathematical operations (e.g. $+, -, *, /, \sin , \cos , \exp $) that can be expressed using a series of intermediate results. 
Subsequently, derivatives of every intermediate result are evaluated and combined via the chain rule of calculus to obtain the derivatives of the whole sequence. The control flow (e.g. branches, loops) can be incorporated by differentiating the control flow of the original function during the derivative evaluation. Two main modes of AD, which differ in the order of application of the chain rule, are used: Forward mode operates in a top-down approach and computes the derivative of every intermediate result with respect to a single selected input variable of the function. As soon as a final result of the function is reached, the partial derivative with respect to the selected input is available. A single evaluation of the forward mode can only compute partial derivatives with respect to a single input variable. Thus, when the whole gradient is required, forward mode must be invoked once per every input variable, leading to $m \cdot c_{F} \cdot n$ runtime complexity, where $m$ is the number of input variables, $n$ is the algorithmic complexity of the original function and $c_{F} < 3 $ is a small constant factor overhead of a single invocation of the forward mode BIBREF2. Reverse mode operates in a bottom-up approach and computes the derivative of a function's output with respect to every intermediate result. Once every input variable of the function is reached, the whole gradient of an output is available. Note that, independently on the number of input variables $N$, a single evaluation of the reverse mode is sufficient to get the whole gradient of a function's output, leading to $c_{R} \cdot n$ runtime complexity, where $n$ is the complexity of the original function and $c_{R} \le 4$ is a small constant factor overhead BIBREF2. This is a huge advantage in settings with a single scalar output and many inputs, which is often the case in machine-learning problems where $N >> 10^6$ that makes the forward mode infeasible. As a disadvantage, reverse mode implementations are more complicated, and dynamic memory allocations may be required when dynamic control flow is involved. Depending on the original function, this may cause a single evaluation of the reverse mode to be somewhat slower compared to a single evaluation of the forward mode. Background ::: AD Implementations AD techniques have been implemented in a variety of programming languages and paradigms, ranging from classical tools for Fortran BIBREF6 and C BIBREF7, to recent active work on tools specific to machine-learning applications BIBREF8, BIBREF9, and modern general-purpose programming languages BIBREF10, BIBREF11. We refer the reader to www.autodiff.org for a comprehensive list of available AD implementations for various languages. In particular, several implementations exist for C++, e.g. BIBREF12, BIBREF13, BIBREF14. Majority of implementations of AD fall into one of the two categories of implementation techniques: Tools based on operator overloading utilize features of programming languages like C++ and Python to define custom types and overload mathematical operators (e.g. +, -, *, /) and functions (e.g. $\exp , \sin , \cos $) on them. Such implementations are often based on custom AD-enabled types that wrap values of both the original and derivative functions and redefine operators to simultaneously act on original and derivative values. In C++, such tools are often implemented as a library that introduces templated differentiable types and corresponding mathematical operations. 
Then, functions called on the custom type return both original and derivative values. This is a powerful technique but has two primary limitations: legacy code and performance. Functions must be either polymorphic (templated) or explicitly defined on AD-enabled type to be differentiated. Differentiation of pre-existing source code using builtin types such as double and float is not possible. Users are required to use additional level of abstraction in the form of library-specific types instead of first-class language features. Moreover, the performance of the derivative generation can be suboptimal due to the C++ metaprogramming system which usually constructs deep template instantiation chains. Performance can be even more problematic when creating a higher order derivatives. Tools based on source transformation analyze the source code of the original function and build another source code for the derivative function. Such techniques typically accept and generate any code using built-in features of the original language and do not require custom libraries. On the other hand, they require an additional pass over the source file to analyze and generate derivative code. Source transformation can fully utilize source-level optimizations and has reasonably good performance. Implementation is more complicated and it is problematic to achieve full coverage of C++ language features. While full integration with a compiler can make AD a first-class language feature that is transparent for the user, most current implementations for C++ are based on custom parsers that do not have full coverage of the vast variety of C++ language constructs and require a separate step before compilation. Architecture and Implementation Automatic differentiation in ROOT is based on Clad BIBREF4, BIBREF5. Clad is a source transformation AD tool for C++. It is based on LLVM compiler infrastructure BIBREF15 and is implemented as a plugin for C++ compiler Clang, which allows Clad to be transparently integrated into the compilation phase and to utilize large parts of the compiler. Clad relies on Clang's parsing and code generation functionality and can differentiate complicated C++ constructs. Clad supports both forward and reverse mode. It is available as a standalone Clang plugin that, when attached to the compiler, produces derivatives in the compilation phase. On top of that, Clad is integrated directly into ROOT to provide AD functionality as an integral part of the framework. ROOT has a C++ interpreter Cling BIBREF16 which is built on the top of LLVM and Clang. This allows Clad to be attached to Cling as a plugin in a similar way as it can be attached to Clang. In this section, we discuss 1) architecture of Clad and its interaction with Cling; and 2) details of its integration into ROOT. Clad operates on Clang AST (abstract syntax tree) by analyzing the AST of the original function and generating the AST of the derivative. Clad provides two API functions: clad::differentiate for forward mode and clad::gradient for reverse mode, which can be used directly in the source code to mark a function for differentiation (see BIBREF5 for more details on usage and code examples). The information flow of interactions with Cling during differentiation (Figure FIGREF13) is: A function is marked for differentiation with the C++ construct clad::differentiate or clad::gradient (step 1). Cling in ROOT performs incremental compilation and receives an abstract syntax tree (AST) representation of the code (step 2). 
Cling detects the differentiation marker and sends the AST of the original function to Clad, which transforms the AST to produce the AST of the derivative (step 3). Clad returns the derivative AST to Cling for code generation and execution by the low level LLVM primitives (steps 4, 5, 6, 7). Alternatively, if Clad was configured for non-interactive use, the generated AST can be converted to a C++ source code and written to a text file. The generated code then can be compiled with any C++ compiler (steps 8, 9). Inside of ROOT, interface functions clad::differentiate and clad::gradient are accessible via include <Math/CladDerivator.h>. Clad is also directly integrated into the TFormula class that encapsulates the concept of multidimensional mathematical functions in ROOT. TFormula is a primitive in ROOT's math package which is connected to the Cling interpreter. In the context of TFormula, Clad can differentiate functions available in the interpreter. The TFormula::GenerateGradientPar method uses Clad to differentiate the underlying code of the formula with respect to its parameters and generate the code for the gradient. TFormula::GradientPar method then evaluates the gradient at a specified point. Results In this section, we empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method) in ROOT. We show that AD can drastically improve accuracy and performance of derivative evaluation, compared to ND. Results ::: Accuracy As stated in Section SECREF1, numerical differentiation may give imprecise results while AD computes the derivatives exactly. We show an example of a function where this difference is apparent: AD provides exact result while ND suffers from the loss of accuracy. 2 The function is the PDF of Breit-Wigner distribution (Eq. DISPLAY_FORM19), whose derivative with respect to $\Gamma $ (Eq. DISPLAY_FORM20) has critical points at $\Gamma =\pm {2x}$. In ROOT, the function is implemented as in (Listing SECREF18). linenos=false inline double breitwignerpdf(double x, double gamma, double x0 = 0) double gammahalf = gamma/2.0; return gammahalf/(MPI * ((x-x0)*(x-x0) + gammahalf*gammahalf)); listingBreit-Wigner PDF implementation in ROOT When evaluating the derivative of breitwignerpdf with respect to gamma at x=1, gamma=2, ND in ROOT the yields a result close to 0 with an absolute error of $10^{-13}$ despite the fact that the function is smooth and well-conditioned at this point. The approximation error becomes larger when the derivative is evaluated further from the critical point. In contrast, the automatic differentiation (in both modes) yields the exact result of 0. Results ::: Performance Section SECREF2 showed that reverse mode AD computes gradients in a single pass with a runtime complexity of at most $4 \cdot n$, which depends only on the complexity $n$ and not the dimensionality $dim$ of the original function. On the other hand, numerical differentiation requires a separate evaluation of the original function for every dimension to compute the entire gradient, making the overall the run-time complexity of gradient evaluation via central finite difference method $2 \cdot dim \cdot n$. Hence, in theory, reverse mode achieves an asymptotic speedup of $O(dim)$ over the numerical differentiation and can be up to $dim / 2$ times faster. 
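The single backward sweep that gives reverse mode its input-independent cost can be illustrated with a minimal tape: every intermediate result records its parents and the local partial derivatives with respect to them, and one pass from the output back to the inputs accumulates the whole gradient. This is a generic textbook-style sketch written for this illustration, not Clad's generated code.

#include <cmath>
#include <cstdio>
#include <vector>

// A minimal reverse-mode tape: each node stores up to two parents and the
// local partial derivatives of the node with respect to them.
struct Tape {
    struct Node { int p0, p1; double w0, w1; };
    std::vector<Node> nodes;

    int input() { nodes.push_back({-1, -1, 0.0, 0.0}); return (int)nodes.size() - 1; }
    int unary(int a, double w) { nodes.push_back({a, -1, w, 0.0}); return (int)nodes.size() - 1; }
    int binary(int a, int b, double wa, double wb) {
        nodes.push_back({a, b, wa, wb});
        return (int)nodes.size() - 1;
    }

    // One backward sweep from the output: cost proportional to the tape length,
    // independent of how many inputs the gradient is taken over.
    std::vector<double> gradient(int output) const {
        std::vector<double> adj(nodes.size(), 0.0);
        adj[output] = 1.0;
        for (int i = output; i >= 0; --i) {
            const Node& n = nodes[i];
            if (n.p0 >= 0) adj[n.p0] += n.w0 * adj[i];
            if (n.p1 >= 0) adj[n.p1] += n.w1 * adj[i];
        }
        return adj;
    }
};

int main() {
    // f(x, y) = sin(x * y) + x at x = 1.5, y = 2.0.
    Tape t;
    double xv = 1.5, yv = 2.0;
    int x = t.input(), y = t.input();
    double mv = xv * yv;
    int m = t.binary(x, y, yv, xv);        // d(x*y)/dx = y, d(x*y)/dy = x
    int s = t.unary(m, std::cos(mv));      // d sin(m)/dm = cos(m)
    int f = t.binary(s, x, 1.0, 1.0);      // d(s + x)/ds = 1, d(s + x)/dx = 1
    std::vector<double> adj = t.gradient(f);
    std::printf("df/dx = %f  (expect y*cos(x*y) + 1)\n", adj[x]);
    std::printf("df/dy = %f  (expect x*cos(x*y))\n", adj[y]);
    return 0;
}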
We experimentally verify this by comparing the performance of gradient evaluation produced by reverse mode AD against our an implementation of numerical differentiation via the central finite difference method. We use the two functions in Listing SECREF21: sum, which computes the sum of all values in a vector; and mvn, which implements the PDF of a multivariate normal distribution. Both functions have a parameter dim which defines the dimension, and gradients are taken with respect to dim-dimensional vector p. While closed-form expressions of these gradients are well-known, these functions make a good basis of a benchmark as they perform typical operations that are commonly found inside more complicated functions (e.g. +, *, pow, exp inside loop). linenos=false double sum(double* p, int dim) double r = 0.0; for (int i = 0; i < dim; i++) r += p[i]; return r; linenos=false double mvn(double* x, double* p /*means*/, double sigma, int dim) double t = 0; for (int i = 0; i < dim; i++) t += (x[i] - p[i])*(x[i] - p[i]); t = -t / (2*sigma*sigma); return std::pow(2*MPI, -n/2.0) * std::pow(sigma, -0.5) * std::exp(t); listingImplementations of sum and mvn functions Gradients of sum produced by numerical differentiation and Clad are shown in Listing SECREF21. linenos=false double* sumnumgrad(double* p, int dim, double eps = 1e-8) double result = new double[dim]; for (int i = 0; i < dim; i++) double pi = p[i]; p[i] = pi + eps; double v1 = sum(p, dim); p[i] = pi - eps; double v2 = sum(p, dim); result[i] = (v1 - v2)/(2 * eps); p[i] = pi; return result; linenos=false void sumadgrad(double *p, int dim, double *result) double dr = 0; unsigned long t0; int di = 0; clad::tape<int> t1 = ; double r = 0.; t0 = 0; for (int i = 0; i < dim; i++) t0++; r += p[clad::push(t1, i)]; double sumreturn = r; dr += 1; for (; t0; t0–) double rd0 = dr; dr += rd0; result[clad::pop(t1)] += rd0; dr -= rd0; listingGradient of sum: (left) using finite differences, (right) generated by Clad We perform the evaluation for values of dim between 5 and 20480. Figure FIGREF22 shows the comparison for (a) sum; (b) mvn and confirms the expected theoretical speedup of $O(dim)$, with AD-generated gradient being $~dim/4$ times faster for sum and $~dim/25$ times faster for mvn (slowdown is due to more expensive operations like pow, exp). Results ::: Performance in TFormula Figure FIGREF26 shows the performance comparisons of reverse-mode AD and ND for the task of evaluating gradients of TFormula's builtin primitive probability density functions. The functions are gaus ($dim=3$), expo ($dim=2$), crystalball ($dim=5$), breitwigner ($dim=5$) and cheb2 ($dim=4$). Despite the low dimensionality ($dim \le 5$), AD gives significant (approx. 10x) speedups. The speedups are even larger than expected factor of $dim/2$ that follows from theoretical results, apparently due to additional overhead of the implementation of numerical differentiation in ROOT, which tries to find the optimal step size for its finite difference method to improve accuracy. In Figure FIGREF26, we perform fitting of a Gaussian distribution to a histogram of random samples via gradient-based optimization. In ROOT, this functionality is implemented in TFormula-based TF1 class. We can therefore use AD due to the integration of Clad into TFormula. Figure FIGREF26 compares the performance of the AD-based TF1 fitting with the numerical fitting in the Hist package. As in previous experiments, we show that AD scales better with problem dimensionality (number of histogram bins) on this task. 
The integration of Clad into TFormula makes it straightforward to use AD for fitting in ROOT. Conclusion We discussed our implementation of automatic differentiation in ROOT based on Clad. We demonstrated that Clad is integrated into ROOT and can be easily used in various contexts inside ROOT (e.g. histogram fitting). Furthermore, we showed that automatic differentiation in ROOT achieves significant improvements in accuracy and performance over numerical differentiation. The performance and accuracy are promising and encourage further work in the development of Clad and its integration in ROOT. Possible further improvements for Clad include optimizations to code transformation and design of a consistent interface for derivatives and gradients computation. This functionality can be further extended, including the computation of Jacobians and higher-order derivatives. In order to achieve optimal performance, the evaluation of individual derivatives could be executed in parallel. Besides, the Clad API should enable a flexible execution method based on the needs of its user. Acknowledgments This work has been supported by U.S. NSF grants PHY-1450377 and 1450323.
Unanswerable
22c36082b00f677e054f0f0395ed685808965a02
22c36082b00f677e054f0f0395ed685808965a02_0
Q: Do they conduct any human evaluation? Text: Introduction The sequence to sequence BIBREF0, BIBREF1 approach to Neural Machine Translation (NMT) has shown to improve quality in various translation tasks BIBREF2, BIBREF3, BIBREF4. While translation quality is normally measured in terms of correct transfer of meaning and of fluency, there are several applications of NMT that would benefit from optimizing the output length, such as the translation of document elements that have to fit a given layout – e.g. entries of tables or bullet points of a presentation – or subtitles, which have to fit visual constraints and readability goals, as well as speech dubbing, for which the length of the translation should be as close as possible to the length of the original sentence. Current NMT models do not model explicitly sentence lengths of input and output, and the decoding methods do not allow to specify desired number of tokens to be generated. Instead, they implicitly rely on the observed length of the training examples BIBREF5, BIBREF6. Sequence-to-sequence models have been also applied to text summarization BIBREF7 to map the relevant information found in a long text into a limited-length summary. Such models have shown promising results by directly controlling the output length BIBREF8, BIBREF9, BIBREF10, BIBREF11. However, differently from MT, text summarization (besides being a monolingual task) is characterized by target sentences that are always much shorter than the corresponding source sentences. While in MT, the distribution of the relative lengths of source and target depends on the two languages and can significantly vary from one sentence pair to another due to stylistic decisions of the translator and linguistic constraints (e.g. idiomatic expressions). In this work, we propose two approaches to control the output length of a transformer NMT model. In the first approach, we augment the source side with a token representing a specific length-ratio class, i.e. short, normal, and long, which at training time corresponds to the observed ratio and at inference time to the desired ratio. In the second approach, inspired by recent work in text summarization BIBREF11, we enrich the position encoding used by the transformer model with information representing the position of words with respect to the end of the target string. We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01. 
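Since the stated goal is a translation that is not longer than its source, the basic quantity tracked throughout is a character-level length ratio between output and source; a minimal sketch of that check (illustrative only, not the evaluation script):

#include <string>

// Character-level length ratio of a translation with respect to its source;
// the target is to keep this at or below 1.0.
double lengthRatio(const std::string& translation, const std::string& source) {
    return static_cast<double>(translation.size()) / static_cast<double>(source.size());
}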
Background Our proposal is based on the transformer architecture and a recently proposed extension of its positional encoding aimed to control the length of generated sentences in text summarization. Background ::: Transformer Transformer BIBREF12 is a sequence-to-sequence architecture that processes sequences using only attention and feed forward layers. Its core component is the so-called multi-head attention, which computes attention BIBREF0, BIBREF13 between two sequences in a multi-branch fashion BIBREF14. Within the encoder or the decoder, each layer first computes attention between two copies of the same sequence (self-attention). In the decoder, this step is followed by an attention over the encoder output sequence. The last step in each layer is a two-layered time-distributed feed-forward network, with a hidden size larger than its input and output. Attention and feed-forward layers are characterized by a position-invariant processing of their input. Thus, in order to enrich input embeddings in source and target with positional information, they are summed with positional vectors of the same dimension $d$, which are computed with the following trigonometric encoding ($\text{PE}$): for $i=1,\ldots ,d/2$. Background ::: Length encoding in summarization Recently, an extension of the positional encoding BIBREF11 was proposed to model the output length for text summarization. The goal is achieved by computing the distance from every position to the end of the sentence. The new length encoding is present only in the decoder network as an additional vector summed to the input embedding. The authors proposed two different variants. The first variant replaces the variable pos in equations (1-2) with the difference $len - pos$, where len is the sentence length. The second variant attempts to model the proportion of the sentence that has been covered at a given position by replacing the constant 10000 in the denominator of equations (1-2) with $len$. As decoding is performed at the character level, len and pos are given in number of characters. At training time, len is the observed length of the reference summary, while at inference time it is the desired length. Methods We propose two methods to control the output length in NMT. In the first method we partition the training set in three groups according to the observed length ratio of the reference over the source text. The idea is to let the model learn translation variants by observing them jointly with an extra input token. The second method extends the Transformer positional encoding to give information about the remaining sentence length. With this second method the network can leverage fine-grained information about the sentence length. Methods ::: Length Token Method Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\text{min}$ and $t_\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\text{min}$ and $t_\text{max}$ are in the normal group, the ones with ratio below $t_\text{min}$ in short and the remaining in long. 
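The grouping just described can be sketched directly: given the thresholds $t_\text{min}$ and $t_\text{max}$ chosen from the length-ratio distribution, a sentence pair is assigned to one of the three groups by its target/source character-length ratio. The enum and any concrete threshold values are placeholders, not the paper's.

#include <cstddef>

enum class LengthGroup { Short, Normal, Long };

// Assign a training pair to a group by its target/source character length ratio.
LengthGroup lengthGroup(std::size_t srcChars, std::size_t tgtChars,
                        double tMin, double tMax) {
    double ratio = static_cast<double>(tgtChars) / static_cast<double>(srcChars);
    if (ratio < tMin) return LengthGroup::Short;
    if (ratio > tMax) return LengthGroup::Long;
    return LengthGroup::Normal;
}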
At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group. Methods ::: Length Encoding Method Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length: where $i=1,\ldots ,d/2$. Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers: where $q_N: [0, 1] \rightarrow \lbrace 0, 1, .., N\rbrace $ is simply defined as $q_N(x) = \lfloor {x \times N}\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9. Methods ::: Combining the two methods We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length. Methods ::: Fine-Tuning for length control Training an NMT model from scratch is a compute intensive and time consuming task. Alternatively, fine-tuning a pre-trained network shows to improve performance in several NMT scenarios BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24. For our length control approaches, we further propose to use fine-tuning an NMT model with length information, instead of training it from scratch. By adopting a fine-tuning strategy, we specifically aim; i) to decouple the performance of the baseline NMT model from that of the additional length information, ii) control the level of aggressiveness that can come from the data (length token) and the model (length encoding), and iii) make the approaches versatile to any pre-trained model. More importantly, it will allow to transform any NMT model to an output length aware version, while getting better improvements on the quality of the generated sequences. Experiments ::: Data and Settings Our experiments are run using the English$\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). 
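A sketch of the two length encodings described above: the absolute variant feeds the remaining number of characters, len - pos, through the usual sinusoidal encoding PE(p, 2i) = sin(p / 10000^{2i/d}), PE(p, 2i+1) = cos(p / 10000^{2i/d}); the relative variant quantizes the remaining fraction of the sentence with q_N(x) = floor(x * N). Exactly how the quantized value enters the sinusoids is not spelled out above, so that last step is an assumption of this illustration (d is assumed even).

#include <cmath>
#include <vector>

// Standard sinusoidal encoding of a position-like quantity p into d dimensions.
std::vector<double> sinusoid(double p, int d) {
    std::vector<double> enc(d);
    for (int i = 0; i < d / 2; ++i) {
        double denom = std::pow(10000.0, (2.0 * i) / d);
        enc[2 * i] = std::sin(p / denom);
        enc[2 * i + 1] = std::cos(p / denom);
    }
    return enc;
}

// Absolute length encoding: encode the remaining target length in characters.
std::vector<double> absoluteLengthEncoding(int pos, int len, int d) {
    return sinusoid(static_cast<double>(len - pos), d);
}

// Relative length encoding: quantize the remaining fraction of the sentence,
// q_N(x) = floor(x * N), before encoding it (assumption: the quantized integer
// is what gets fed to the sinusoids).
std::vector<double> relativeLengthEncoding(int pos, int len, int d, int N) {
    double remaining = static_cast<double>(len - pos) / static_cast<double>(len);
    int q = static_cast<int>(std::floor(remaining * N));
    return sinusoid(static_cast<double>(q), d);
}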
Experiments ::: Data and Settings

Our experiments are run using the English$\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data amounting to about 16 million sentence pairs for English-Italian (En-It), and $4.4$ million WMT14 sentence pairs for English-German (En-De). While our main goal is to verify our hypotheses in a large data condition, hence the need to include proprietary data, for the sake of reproducibility we also provide results for both languages with systems trained only on TED Talks (small data condition).

When training on large-scale data we use a Transformer with layer size 1024, hidden size 4096 in the feed-forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set the layer size to 512, the hidden size of the feed-forward layers to 2048, the multi-head attention to 8 heads, and again 6 layers in both encoder and decoder. In all the experiments, we use the Adam BIBREF26 optimizer with an initial learning rate of $1\times 10^{-7}$ that increases linearly up to $0.001$ over 4000 warm-up steps, and decreases afterwards with the inverse square root of the training step. The dropout is set to $0.3$ in all layers but the attention, where it is $0.1$. The models are trained with label-smoothed cross-entropy with a smoothing factor of $0.1$. Training is performed on 8 Nvidia V100 GPUs, with batches of 4500 tokens per GPU. Gradients are accumulated for 16 batches in each GPU BIBREF27. We select the models for evaluation by applying early stopping based on the validation loss. All texts are tokenized with scripts from the Moses toolkit BIBREF28, and then words are segmented with BPE BIBREF17 with 32K joint merge rules. For evaluation we take the best-performing checkpoint on the dev set according to the loss.

The sizes of the data clusters used for the length token method and their corresponding target-source length ratios are reported in Table TABREF19. The value of $N$ for the relative encoding is set to a small value (5), as in preliminary experiments we observed that a high value (100) produces results similar to the absolute encoding.

Experiments ::: Models

We evaluate our Baseline Transformer using two decoding strategies: i) a standard beam search inference (standard), and ii) beam search with length penalty (penalty) set to $0.5$ to favor shorter translations BIBREF29. Length token models are evaluated with three strategies, corresponding to the three tokens prepended to the source test set one at a time (short, normal, and long), and are reported as Len-Tok. Length encoding (Len-Enc) models are evaluated in a length matching condition, i.e. the output length has to match the input length. We report the relative (Rel) and absolute (Abs) strategies of the approach, as discussed in Section SECREF10. In the small data condition, we additionally evaluated how the fine-tuning strategy compares with a model trained from scratch. In the large data condition, we added a setting that combines both the length-token and length-encoding strategies.

Experiments ::: Evaluation

To evaluate all models' performance we compute BLEU BIBREF30 with the multi-bleu.perl implementation on the single-reference test sets of the En-It and En-De pairs. Given the absence of multiple references covering different length ratios, we also report n-gram precision scores (BLEU$^*$), obtained by multiplying the BLEU score by the inverse of the brevity penalty BIBREF30. The BLEU$^*$ score is meant to measure to what extent shorter translations are a subset of longer translations. The impact on translation lengths is evaluated with the mean sentence-level length ratios between MT output and source (LR$^{src}$) and between MT output and reference (LR$^{ref}$).
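As a concrete illustration of these measures, the sketch below computes BLEU$^*$ and a mean length ratio; the BLEU and brevity-penalty values are assumed to come from the external multi-bleu.perl run, and the toy numbers are illustrative only.

```python
# Minimal sketch of the evaluation measures used here. BLEU and the brevity
# penalty (BP) are assumed to be read from the output of an external scorer
# such as multi-bleu.perl; the toy values below are illustrative only.

def bleu_star(bleu: float, brevity_penalty: float) -> float:
    """BLEU*: BLEU with the brevity penalty factored out (BLEU x 1/BP)."""
    return bleu / brevity_penalty

def mean_length_ratio(outputs, others):
    """Mean sentence-level character-length ratio (LR^src or LR^ref)."""
    ratios = [len(o) / max(len(x), 1) for o, x in zip(outputs, others)]
    return sum(ratios) / len(ratios)

# Example with toy numbers and a single sentence pair:
print(f"BLEU* = {bleu_star(33.96, 0.98):.2f}")
print(f"LR^src = {mean_length_ratio(['Questa e la frase tradotta.'], ['This is the translated sentence.']):.2f}")
```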
Results

We performed experiments in two conditions: small data and large data. In the small data condition we only use the MuST-C training set. In the large data condition, a baseline model is first trained on large data, then it is fine-tuned on the MuST-C training set using the proposed methods. Tables TABREF23 and TABREF26 list the results for the small and large data conditions. For the two language directions they show BLEU and BLEU* scores, as well as the average length ratios.

Results ::: Small Data condition

The baselines generate translations longer than the source sentences, with a length ratio of 1.05 for Italian and 1.11 for German. Decoding with length penalty (penalty) slightly decreases the length ratios, but they are still far from our goal of LR$^{src}$=1.00.

Fine-tuning. A comparison of the models trained from scratch (central portion of Table TABREF23) with their counterparts fine-tuned from the baseline (last portion of Table TABREF23) shows that the models in the first group generally generate shorter translations, but of worse quality. Additionally, the results with fine-tuning are not much different from the baseline. Existing models can thus be enhanced to produce shorter sentences, with little variation observed in their translation quality.

Length tokens. Fine-tuning with Len-Tok (fourth section in Table TABREF23) gives a coarse-grained control over the length, while keeping BLEU scores similar to the baseline or slightly better. Decoding with the token normal leads to translations slightly shorter than the baseline for En-It (LR$^{src}$=1.05 and LR$^{ref}$=1.02), while the token short strongly reduces the translation lengths down to almost the source length (LR$^{src}$=1.01). On the opposite side, the token long generates longer translations, which are slightly worse than the others (32.00). A similar behavior is observed for En-De, where the LR$^{src}$ goes from 1.12 to 1.07 when replacing normal with short, and to 1.15 with long. The results with the token long are not interesting for our task and are given only for the sake of completeness.

Length Encoding. The last section of Table TABREF23 lists the results of using length encoding (Len-Enc) in its relative (Rel) and absolute (Abs) variants. The two encodings lead to different generated lengths, with Abs being always shorter than Rel. Unfortunately, these improvements in the lengths correspond to a significant degradation in translation quality, mostly due to truncated sentences.

Results ::: Large data condition

Our Baselines for the large data condition generate sentences with length ratios over the source comparable to the small data condition (LR$^\text{src}$ and LR$^\text{ref}$), but with better translation quality: 35.46 BLEU points for En-It and 33.96 for En-De. The length penalty slightly reduces the length ratios, which results in a 0.3 BLEU point improvement in Italian and -0.3 in German because of the brevity penalty. In the latter case, the BLEU* is slightly better than the standard baseline output. Also for the large data condition, while the length penalty slightly helps to shorten the translations, its effect is minimal and insufficient for our goal.

Length tokens. In En-It there is no noticeable difference in translation quality between the tokens normal and short, while there is a degradation of $\sim 0.7$ points when using long.
This last result is consistent with the ones observed before. Also in this case the token short does not degrade the BLEU score, and obtains the highest precision BLEU* with 36.22. In En-De we obtain the best results with the token normal (34.46), which matches the length distribution of the references. The token short generates much shorter outputs (LR$^\text{src}$=1.05), which are also much shorter than the reference (LR$^\text{ref}=0.93$). Consequently the BLEU score degrades significantly (30.61), and the BLEU* is also 1 point lower than with the token normal. Longer translations can be generated with the token long, but they always come at the expense of lower quality.

Length encoding. For En-It, Len-Enc Rel in Table TABREF26 achieves an LR$^\text{src}$ of 1.01 with a slight degradation of $0.3$ BLEU points over the baseline, while in the case of Abs the degradation is higher (-1.6) and LR$^\text{src}$ is similar (1.02). Also in En-De the degradation of Rel over the baseline is only -0.3, but the reduction in terms of LR$^\text{src}$ is very small (1.11 vs 1.13). On the other hand, Abs produces much shorter translations (1.03 LR$^\text{src}$) at the expense of a significantly lower BLEU score (30.79). When computing the BLEU* score, the absolute encoding is only 0.45 points lower than the relative encoding (33.29 vs 33.74), but 0.8 points lower than the baseline.

Token + Encoding. So far, we have observed generally good results using the token method and translating with the tokens short and normal, while the length encoding generally produces a more predictable output length, in particular for the absolute variant. In the last experiment, we combine the two methods in order to have a system that can capture different styles (short, normal, long), as well as explicitly leverage length information. The results listed in the last portion of Table TABREF26 (Tok+Enc) show that the relative encoding Rel produces better translations than Abs, but again it has less predictability in output length. For instance, in En-It the LR$^\text{src}$ of Rel is 0.96 with token short and 1.02 with normal, while for En-De it is 1.01 with short and 1.08 with normal. On the other hand, Abs produces an LR$^\text{src}$ of 1.01 with both tokens in En-It and also with short in En-De, and it increases to only 1.03 with normal.

Controlling output length. In order to achieve an LR$^\text{src}$ as close as possible to 1.0, we set the target length during generation equal to the source length when using the length encoding methods. However, one advantage of length encoding is the possibility to set the target length so as to modify the average output length. We illustrate this option by using the Tok+Enc Rel system for En-It, and translating with the tokens normal or short and different scaling factors for the target length. The results, listed in Table TABREF27, show that we are able to approach an LR$^{src}$ of 1.0 with both tokens, and that the BLEU score is not affected with the token normal (35.45) or improves with the token short (35.11).
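As a small illustration of this option, the target length fed to the length encoding at inference time can be obtained by scaling the source character length; the helper below and its default values are illustrative assumptions, not the paper's code.

```python
# Minimal sketch: choose the target length passed to the length encoding at
# inference time as a scaled version of the source length (in characters).
# The helper name and the example scale are illustrative assumptions.

def target_length(source: str, scale: float = 1.0) -> int:
    """Desired output length in characters, e.g. scale=0.9 for ~10% shorter output."""
    return max(1, round(scale * len(source)))

src = "This is the sentence that we want to translate."
print(target_length(src))             # match the source length (LR^src close to 1.0)
print(target_length(src, scale=0.9))  # ask for a roughly 10% shorter translation
```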
Discussion. The length token is an effective approach to generate translations of different lengths, but it does not allow a fine-grained control of the output lengths, and its results depend on the partition of the training set into groups, which is a manual process. Length encoding allows changing the output length, but the two variants have different effects. Absolute encoding is more accurate but generates sentences with missing information. The relative encoding produces better translations than the absolute encoding, but its control over the translation length is looser. The difference in length stability is captured by the standard deviation of the length ratio with respect to the source, which is $0.14$ for length tokens, $\sim 0.11$ for relative encoding and $\sim 0.07$ for absolute encoding. The advantage of the combined approach is that it can generate sentences with different styles to fit different length groups, and the output length can also be tuned by modifying the target length, while no significant quality degradation is observed. Additionally, the standard deviation of the lengths is the same as for the length encoding used.

Results ::: Human Evaluation and Analysis

After manually inspecting the outputs of the best-performing models under the large data condition, we decided to run a human evaluation only for the En-It Len-Tok model. As our ultimate goal is to generate translations that are shorter and as close as possible to the length of the source sentences, we focused the manual evaluation on the Short output class and aimed to verify possible losses in quality with respect to the baseline system. We ran a head-to-head evaluation on the first 10 sentences of each test talk, for a total of 270 sentences, by asking annotators to blindly rank the two system outputs (ties were also permitted) in terms of quality with respect to a reference translation. We collected three judgments for each output, from 19 annotators, for a total of 807 scores (one sentence had to be discarded). Inter-annotator agreement measured with Fleiss' kappa was 0.35 (= fair agreement). The results reported in Table TABREF32 confirm the small differences observed in the BLEU scores: there are only 4% more wins for the Baseline and almost 60% ties. The small degradation in quality of the shorter translations is statistically significant ($p<0.05$), as is their difference in length ($p<0.001$). Notice that the evaluation was quite severe towards the shorter translations, as even small changes in meaning could affect the ranking.

After the manual evaluation, we analyzed sentences in which the shorter translations were unanimously judged equal to or better than the standard translations. We hence tried to identify the linguistic skills involved in the generation of shorter translations, namely: (i) use of abbreviations, (ii) preference for simple verb tenses over compound tenses, (iii) avoidance of irrelevant adjectives, adverbs, pronouns and articles, (iv) use of paraphrases. Table TABREF33 shows examples of the application of the above strategies as found in the test set.
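For reference, the sketch below shows how Fleiss' kappa, the agreement statistic reported above, can be computed; the toy rating matrix is purely illustrative and not taken from the actual evaluation data.

```python
import numpy as np

# Minimal sketch of Fleiss' kappa for inter-annotator agreement. The toy matrix
# (items x categories, 3 judgments per item) is illustrative only.

def fleiss_kappa(ratings: np.ndarray) -> float:
    """ratings[i, j] = number of annotators assigning item i to category j."""
    n = ratings.sum(axis=1)[0]                   # judgments per item (assumed constant)
    p_j = ratings.sum(axis=0) / ratings.sum()    # share of all ratings per category
    p_i = (np.square(ratings).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    p_bar, p_e = p_i.mean(), float(np.square(p_j).sum())
    return float((p_bar - p_e) / (1 - p_e))

# Example: 4 items, categories = (baseline wins, tie, shorter output wins).
toy = np.array([[2, 1, 0],
                [0, 3, 0],
                [1, 1, 1],
                [0, 2, 1]])
print(f"kappa = {fleiss_kappa(toy):.2f}")
```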
Related works

To complement Section 2, we try to provide a more complete picture of previous work on seq-to-seq models that control the output length for text summarization, and of the use of tokens to bias the output of NMT in different ways. In text summarization, BIBREF8 proposed methods to control the output length either by modifying the search process or the seq-to-seq model itself, showing the latter to be more promising. BIBREF9 addressed the problem similarly to our token approach, by training the model on data bins of homogeneous output length and conditioning the output on a length token. They reported better performance than BIBREF8. Finally, BIBREF11 proposed the extension of the positional encoding of the transformer (cf. Section 2), reporting better performance than BIBREF8 and BIBREF9.

The use of tokens to condition the output of NMT started with the multilingual models BIBREF15, BIBREF16, and was then further applied to control the politeness form in English-German NMT BIBREF32, to translate from English into different varieties of the same language BIBREF33, to personalize NMT to user gender and vocabulary BIBREF34, and finally to perform NMT across different translation styles BIBREF35.

Conclusion

In this paper, we have proposed two solutions to the problem of controlling the output length of NMT. A first approach, inspired by multilingual NMT, allows a coarse-grained control over the length with no degradation in translation quality. A second approach, inspired by positional encoding, enables a fine-grained control with only a small error in the token count, but at the cost of a lower translation quality. A manual evaluation confirms the translation quality observed with the BLEU scores. In future work, we plan to design more flexible and context-aware evaluations which allow us to account for short translations that are not equivalent to the original but at the same time do not affect the overall meaning of the discourse.
Yes
85a7dbf6c2e21bfb7a3a938381890ac0ec2a19e0
85a7dbf6c2e21bfb7a3a938381890ac0ec2a19e0_0
Q: What dataset do they use for experiments?

Text: Introduction

The sequence-to-sequence BIBREF0, BIBREF1 approach to Neural Machine Translation (NMT) has been shown to improve quality in various translation tasks BIBREF2, BIBREF3, BIBREF4. While translation quality is normally measured in terms of correct transfer of meaning and of fluency, there are several applications of NMT that would benefit from optimizing the output length, such as the translation of document elements that have to fit a given layout – e.g. entries of tables or bullet points of a presentation – or subtitles, which have to fit visual constraints and readability goals, as well as speech dubbing, for which the length of the translation should be as close as possible to the length of the original sentence. Current NMT models do not explicitly model the sentence lengths of input and output, and the decoding methods do not allow specifying the desired number of tokens to be generated. Instead, they implicitly rely on the observed length of the training examples BIBREF5, BIBREF6.

Sequence-to-sequence models have also been applied to text summarization BIBREF7 to map the relevant information found in a long text into a limited-length summary. Such models have shown promising results by directly controlling the output length BIBREF8, BIBREF9, BIBREF10, BIBREF11. However, differently from MT, text summarization (besides being a monolingual task) is characterized by target sentences that are always much shorter than the corresponding source sentences. In MT, instead, the distribution of the relative lengths of source and target depends on the two languages and can vary significantly from one sentence pair to another, due to stylistic decisions of the translator and linguistic constraints (e.g. idiomatic expressions).

In this work, we propose two approaches to control the output length of a transformer NMT model. In the first approach, we augment the source side with a token representing a specific length-ratio class, i.e. short, normal, and long, which at training time corresponds to the observed ratio and at inference time to the desired ratio. In the second approach, inspired by recent work in text summarization BIBREF11, we enrich the position encoding used by the transformer model with information representing the position of words with respect to the end of the target string. We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasings, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows, for the En-It task, a slight quality degradation in exchange for a statistically significant reduction in the average length ratio, from 1.05 to 1.01.
English$\rightarrow $Italian/German portions of the MuST-C corpus; as additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for English-German (En-De)
90bc60320584ebba11af980ed92a309f0c1b5507
90bc60320584ebba11af980ed92a309f0c1b5507_0
Q: How do they enrich the positional embedding with length information?
The use of tokens to condition the output of NMT started with the multilingual models BIBREF15, BIBREF16, and was then further applied to control the use of the politeness form in English-German NMT BIBREF32, in the translation from English into different varieties of the same language BIBREF33, for personalizing NMT to user gender and vocabulary BIBREF34, and finally to perform NMT across different translation styles BIBREF35. Conclusion In this paper, we have proposed two solutions for the problem of controlling the output length of NMT. A first approach, inspired by multilingual NMT, allows a coarse-grained control over the length and no degradation in translation quality. A second approach, inspired by positional encoding, enables a fine-grained control with only a small error in the token count, but at the cost of a lower translation quality. A manual evaluation confirms the translation quality observed with BLEU score. In future work, we plan to design more flexible and context-aware evaluations which allow us to account for short translations that are not equivalent to the original but at the same time do not affect the overall meaning of the discourse.
They introduce a new trigonometric encoding which, besides positional information, incorporates additional length information, in either an absolute or a relative form.
f52b2ca49d98a37a6949288ec5f281a3217e5ae8
f52b2ca49d98a37a6949288ec5f281a3217e5ae8_0
Q: How do they condition the output to a given target-source class? Text: Introduction The sequence to sequence BIBREF0, BIBREF1 approach to Neural Machine Translation (NMT) has shown to improve quality in various translation tasks BIBREF2, BIBREF3, BIBREF4. While translation quality is normally measured in terms of correct transfer of meaning and of fluency, there are several applications of NMT that would benefit from optimizing the output length, such as the translation of document elements that have to fit a given layout – e.g. entries of tables or bullet points of a presentation – or subtitles, which have to fit visual constraints and readability goals, as well as speech dubbing, for which the length of the translation should be as close as possible to the length of the original sentence. Current NMT models do not model explicitly sentence lengths of input and output, and the decoding methods do not allow to specify desired number of tokens to be generated. Instead, they implicitly rely on the observed length of the training examples BIBREF5, BIBREF6. Sequence-to-sequence models have been also applied to text summarization BIBREF7 to map the relevant information found in a long text into a limited-length summary. Such models have shown promising results by directly controlling the output length BIBREF8, BIBREF9, BIBREF10, BIBREF11. However, differently from MT, text summarization (besides being a monolingual task) is characterized by target sentences that are always much shorter than the corresponding source sentences. While in MT, the distribution of the relative lengths of source and target depends on the two languages and can significantly vary from one sentence pair to another due to stylistic decisions of the translator and linguistic constraints (e.g. idiomatic expressions). In this work, we propose two approaches to control the output length of a transformer NMT model. In the first approach, we augment the source side with a token representing a specific length-ratio class, i.e. short, normal, and long, which at training time corresponds to the observed ratio and at inference time to the desired ratio. In the second approach, inspired by recent work in text summarization BIBREF11, we enrich the position encoding used by the transformer model with information representing the position of words with respect to the end of the target string. We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01. 
Background Our proposal is based on the transformer architecture and a recently proposed extension of its positional encoding aimed to control the length of generated sentences in text summarization. Background ::: Transformer Transformer BIBREF12 is a sequence-to-sequence architecture that processes sequences using only attention and feed forward layers. Its core component is the so-called multi-head attention, which computes attention BIBREF0, BIBREF13 between two sequences in a multi-branch fashion BIBREF14. Within the encoder or the decoder, each layer first computes attention between two copies of the same sequence (self-attention). In the decoder, this step is followed by an attention over the encoder output sequence. The last step in each layer is a two-layered time-distributed feed-forward network, with a hidden size larger than its input and output. Attention and feed-forward layers are characterized by a position-invariant processing of their input. Thus, in order to enrich input embeddings in source and target with positional information, they are summed with positional vectors of the same dimension $d$, which are computed with the following trigonometric encoding ($\text{PE}$): for $i=1,\ldots ,d/2$. Background ::: Length encoding in summarization Recently, an extension of the positional encoding BIBREF11 was proposed to model the output length for text summarization. The goal is achieved by computing the distance from every position to the end of the sentence. The new length encoding is present only in the decoder network as an additional vector summed to the input embedding. The authors proposed two different variants. The first variant replaces the variable pos in equations (1-2) with the difference $len - pos$, where len is the sentence length. The second variant attempts to model the proportion of the sentence that has been covered at a given position by replacing the constant 10000 in the denominator of equations (1-2) with $len$. As decoding is performed at the character level, len and pos are given in number of characters. At training time, len is the observed length of the reference summary, while at inference time it is the desired length. Methods We propose two methods to control the output length in NMT. In the first method we partition the training set in three groups according to the observed length ratio of the reference over the source text. The idea is to let the model learn translation variants by observing them jointly with an extra input token. The second method extends the Transformer positional encoding to give information about the remaining sentence length. With this second method the network can leverage fine-grained information about the sentence length. Methods ::: Length Token Method Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\text{min}$ and $t_\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\text{min}$ and $t_\text{max}$ are in the normal group, the ones with ratio below $t_\text{min}$ in short and the remaining in long. 
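A minimal sketch of the grouping just described, using character-level target/source length ratios; the threshold values below are placeholders chosen for illustration, not the ones estimated from the length-ratio distribution in the paper:

```python
def length_group(src, tgt, t_min=0.95, t_max=1.10):
    # Assign a sentence pair to short / normal / long based on the
    # character-level target-to-source length ratio.
    ratio = len(tgt) / max(len(src), 1)
    if ratio < t_min:
        return "short"
    if ratio <= t_max:
        return "normal"
    return "long"

def partition(pairs, t_min=0.95, t_max=1.10):
    # Split a parallel corpus into the three length-ratio groups.
    groups = {"short": [], "normal": [], "long": []}
    for src, tgt in pairs:
        groups[length_group(src, tgt, t_min, t_max)].append((src, tgt))
    return groups
```

Each pair would then be prefixed with the pseudo-token of its group at training time, as described next.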
At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group. Methods ::: Length Encoding Method Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length: where $i=1,\ldots ,d/2$. Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers: where $q_N: [0, 1] \rightarrow \lbrace 0, 1, .., N\rbrace $ is simply defined as $q_N(x) = \lfloor {x \times N}\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9. Methods ::: Combining the two methods We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length. Methods ::: Fine-Tuning for length control Training an NMT model from scratch is a compute intensive and time consuming task. Alternatively, fine-tuning a pre-trained network shows to improve performance in several NMT scenarios BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24. For our length control approaches, we further propose to use fine-tuning an NMT model with length information, instead of training it from scratch. By adopting a fine-tuning strategy, we specifically aim; i) to decouple the performance of the baseline NMT model from that of the additional length information, ii) control the level of aggressiveness that can come from the data (length token) and the model (length encoding), and iii) make the approaches versatile to any pre-trained model. More importantly, it will allow to transform any NMT model to an output length aware version, while getting better improvements on the quality of the generated sequences. Experiments ::: Data and Settings Our experiments are run using the English$\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). 
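The equations of the two length encodings are not rendered in this extract, so the sketch below only reflects one plausible reading of the description in the Length Encoding Method subsection: the standard trigonometric encoding applied either to the remaining character length (absolute) or to its quantized relative counterpart with $q_N(x)=\lfloor x \times N \rfloor$ (relative). The function names, and the exact argument fed to the relative variant, are assumptions rather than the authors' implementation.

```python
import math

def sinusoid(value, d_model):
    # Transformer-style trigonometric encoding applied to an arbitrary scalar.
    vec = [0.0] * d_model
    for i in range(d_model // 2):
        denom = 10000 ** (2 * i / d_model)
        vec[2 * i] = math.sin(value / denom)
        vec[2 * i + 1] = math.cos(value / denom)
    return vec

def length_encoding(pos, length, d_model, mode="absolute", n_bins=5):
    # `pos` and `length` are character counts; `length` is the reference
    # length at training time and the (optionally scaled) source length
    # at inference time.
    remaining = max(length - pos, 0)
    if mode == "absolute":
        return sinusoid(remaining, d_model)
    # Relative variant: quantize the relative position to the end.
    q = math.floor((remaining / max(length, 1)) * n_bins)  # q_N(x) = floor(x * N)
    return sinusoid(q, d_model)
```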
As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder. In all the experiments, we use the Adam BIBREF26 optimizer with an initial learning rate of $1\times 10^{-7}$ that increases linearly up to $0.001$ for 4000 warm-up steps, and decreases afterwards with the inverse square root of the training step. The dropout is set to $0.3$ in all layers but the attention, where it is $0.1$. The models are trained with label smoothed cross-entropy with a smoothing factor of $0.1$. Training is performed on 8 Nvidia V100 GPUs, with batches of 4500 tokens per GPU. Gradients are accumulated for 16 batches in each GPU BIBREF27. We select the models for evaluation by applying early stopping based on the validation loss. All texts are tokenized with scripts from the Moses toolkit BIBREF28, and then words are segmented with BPE BIBREF17 with 32K joint merge rules. For evaluation we take the best performing checkpoint on the dev set according to the loss. The size of the data clusters used for the length token method and their corresponding target-source length ratios are reported in Table TABREF19. The value of $N$ of the relative encoding is set to a small value (5), as in preliminary experiments we observed that a high value (100) produces results similar to the absolute encoding. Experiments ::: Models We evaluate our Baseline Transformer using two decoding strategies: i) a standard beam search inference (standard), and ii) beam search with length penalty (penalty) set to $0.5$ to favor shorter translations BIBREF29. Length token models are evaluated with three strategies that correspond to the tokens prepended to the source test set at a time (short, normal, and long), and reported as Len-Tok. Length encoding (Len-Enc) models are evaluated in a length matching condition, i.e. output length has to match input length. We report the relative (Rel) and absolute (Abs) strategies of the approach as discussed in Section SECREF10. In the small data condition, we additionally evaluated how the fine-tuning strategy compares with a model trained from scratch. In the large data condition, we added a setting that combines both the length-token and length-encoding strategies. Experiments ::: Evaluation To evaluate all models' performance we compute BLEU BIBREF30 with the multi-bleu.perl implementation on the single-reference test sets of the En-It and En-De pairs. Given the absence of multiple references covering different length ratios, we also report n-gram precision scores (BLEU$^*$), by multiplying the BLEU score by the inverse of the brevity penalty BIBREF30. BLEU$^*$ scores is meant to measure to what extent shorter translations are subset of longer translations. 
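As a side note on the optimizer settings listed above, the warm-up and decay schedule can be sketched as follows; the linear interpolation from the initial value of $1\times 10^{-7}$ is our reading of the description and may differ in detail from the actual implementation:

```python
import math

def learning_rate(step, peak=1e-3, init=1e-7, warmup=4000):
    # Linear warm-up from `init` to `peak` over `warmup` steps,
    # then inverse-square-root decay with the training step.
    if step < warmup:
        return init + (peak - init) * step / warmup
    return peak * math.sqrt(warmup) / math.sqrt(step)
```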
The impact on translation lengths is evaluated with the mean sentence-level length ratios between MT output and source (LR$^{src}$) and between MT output and reference (LR$^{ref}$). Results We performed experiments in two conditions: small data and larger data. In the small data condition we only use the MuST-C training set. In the large data condition, a baseline model is first trained on large data, then it is fine-tuned on the MuST-C training set using the proposed methods. Tables TABREF23 and TABREF26 lists the results for the small and large data conditions. For the two language directions they show BLEU and BLEU* scores, as well as the average length ratios. Results ::: Small Data condition The baselines generate translations longer than the source sentence side, with a length ratio of 1.05 for Italian and 1.11 for German. Decoding with length penalty (penalty) slightly decreases the length ratios but they are still far from our goal of LR$^{src}$=1.00. Fine-tuning. A comparison of the models trained from scratch (central portion of Table TABREF23) with their counterparts fine-tuned from the baseline (last portion of Table TABREF23) shows that the models in the first group generally generate shorter translations, but of worse quality. Additionally, the results with fine-tuning are not much different from the baseline. Existing models can be enhanced to produce shorter sentences, and little variation is observed in their translation quality. Length tokens. Fine-tuning with Len-Tok (Fourth section in Table TABREF23) gives a coarse-grained control over the length, while keeping BLEU scores similar to the baseline or slightly better. Decoding with the token normal leads to translations slightly shorter than the baseline for En-It (LR$^{src}$=1.05 and LR$^{ref}$=1.02), while the token small strongly reduces the translation lengths up to almost the source length (LR$^{src}$=1.01). In the opposite side, the token long generates longer translations which are slightly worse than the others (32.00). A similar behavior is observed for En-De, where the LR$^{src}$ goes from 1.12 to 1.07 when changing normal with short, and to 1.15 with long. The results with the token long are not interesting for our task and are given only for the sake of completeness. Length Encoding. The last section of Table TABREF23 lists the results of using length encoding (Len-Enc) relative (Rel) and absolute (Abs). The two encodings lead to different generated lengths, with Abs being always shorter than Rel. Unfortunately, these improvements in the lengths correspond to a significant degradation in translation quality, mostly due to truncated sentences. Results ::: Large data condition Our Baselines for the large data condition generate sentences with length ratios over the source comparable to the small data condition (LR$^\text{src}$ and LR$^\text{ref}$), but with better translation quality: 35.46 BLEU points for En-It and 33.96 for En-De. Length penalty slightly reduces the length ratios, which results in a 0.3 BLEU points improvement in Italian and -0.3 in German because of the brevity penalty. In the latter case, the BLEU* is slightly better than the standard baseline output. Also for the large data condition, while the length penalty slightly helps to shorten the translations, its effect is minimal and insufficient for our goal. Length tokens. In En-It there is no noticeable difference in translation quality between the tokens normal and short, while there is a degradation of $\sim 0.7$ points when using long. 
This last result is consistent with the ones observed before. Also in this case the token short does not degrade the BLEU score, and obtains the highest precision BLEU* with 36.22. In En-De we obtain the best results with token normal (34.46), which matches the length distribution of the references. The token short generates much shorter outputs (LR$^\text{src}$=1.05), which are also much shorter than the reference (LR$^\text{ref}=0.93$). Consequently the BLEU score degrades significantly (30.61), and also the BLEU* is 1 point lower than with the token normal. Longer translations can be generated with the token long, but they always come at the expense of lower quality. Length encoding. For En-It, Len-Enc Rel in Table TABREF26 achieves a LR$^\text{src}$ of 1.01 with a slight degradation of $0.3$ BLEU points over the baseline, while in the case of Abs the degradation is higher (-1.6) and LR$^\text{src}$ is similar (1.02). Also in En-De the degradation of Rel over the baseline is only -0.3, but the reduction in terms of LR$^\text{src}$ is very small (1.11 vs 1.13). On the other side, Abs produces much shorter translations (1.03 LR$^\text{src}$) at the expense of a significantly lower BLEU score (30.79). When computing the BLEU* score, the absolute encoding is only 0.45 points lower than the relative encoding (33.29 vs 33.74), but -0.8 lower than the baseline. Token + Encoding. So far, we have observed generally good results using the token method and translating with the tokens short and normal. while the length encoding generally produces a more predictable output length, in particular for the absolute variant. In the last experiment, we combine the two methods in order to have a system that can capture different styles (short, normal, long), as well as explicitly leveraging length information. The results listed in the last portion of Table TABREF26 (Tok+Enc) show that the relative encoding Rel produces better translations than Abs, but again it has less predictability in output length. For instance, in En-It the LR$^\text{src}$ of Rel is 0.96 with token short and 1.02 with normal, while for En-De it is 1.01 with short and 1.08 with normal. On the other side, the Abs produces LR$^\text{src}$ of 1.01 with both tokens in En-It and also with short in En-De, and it increases to only 1.03 with normal. Controlling output length. In order to achieve LR$^\text{src}$ as close as possible to 1.0, we set the target length during generation equal to the source length when using the length encoding methods. However, one advantage of length encoding is the possibility to set the target length to modify the average output length. We illustrate this option by using the Tok+Enc Rel system for En-It, and translating with the tokens normal or short and different scaling factors for the target length. The results, listed in Table TABREF27, show that we are able to approach an LR$^{src}$ of 1.0 with both tokens and the BLEU score is not affected with token normal (35.45) or improves with token short (35.11). Discussion. Length token is an effective approach to generate translations of different lengths, but it does not allow a fine-grained control of the output lengths and its results depend on the partition of the training set into groups, which is a manual process. Length encoding allows to change the output length, but the two variants have different effects. Absolute encoding is more accurate but generates sentences with missing information. 
The relative encoding produces better translations than the absolute encoding, but its control over the translation length is more loose. The increased length stability is captured by the standard deviation of the length ratio with the source, which is $0.14$ for length tokens, $\sim 0.11$ for relative encoding and $\sim 0.07$ for absolute encoding. The advantage of the combined approach is that it can generate sentences with different style to fit different length groups, and the output length can also be tuned by modifying the target length, while no important quality degradation is observed. Additionally, the standard deviation of the lengths is the same as for the length encoding used. Results ::: Human Evaluation and Analysis After manually inspecting the outputs of the best performing models under the large data condition, we decided to run a human evaluation only for the En-It Len-Tok model. As our ultimate goal is to be able to generate shorter translations and as close as possible to the length of the source sentences, we focused the manual evaluation on the Short output class and aimed to verify possible losses in quality with respect to the baseline system. We ran a head-to-head evaluation on the first 10 sentences of each test talk, for a total of 270 sentences, by asking annotators to blindly rank the two system outputs (ties were also permitted) in terms of quality with respect to a reference translation. We collected three judgments for each output, from 19 annotators, for a total of 807 scores (one sentence had to be discarded). Inter-annotator agreement measured with Fleiss' kappa was 0.35 (= fair agreement). Results reported in Table TABREF32 confirm the small differences observed in BLEU scores: there are only a 4% more wins for the Baseline and almost 60% of ties. The small degradation in quality of the shorter translations is statistically significant ($p<0.05$), as well as their difference in length ($p<0.001$). Notice that the evaluation was quite severe towards the shorter translations, as even small changes of the meaning could affect the ranking. After the manual evaluation, we analyzed sentences in which shorter translations were unanimously judged equal or better than the standard translations. We hence tried to identify the linguistic skills involved in the generation of shorter translations, namely: (i) use of abbreviations, (ii) preference of simple verb tenses over compound tenses, (iii) avoidance of not relevant adjective, adverbs, pronouns and articles, (iv) use of paraphrases. Table TABREF33 shows examples of the application of the above strategies as found in the test set. Related works As an integration of Section 2, we try to provide a more complete picture on previous work with seq-to-seq models to control the output length for text summarization, and on the use of tokens to bias in different ways the output of NMT. In text summarization, BIBREF8 proposed methods to control output length either by modifying the search process or the seq-to-seq model itself, showing that the latter being more promising. BIBREF9 addressed the problem similarly to our token approach, by training the model on data bins of homogeneous output length and conditioning the output on a length token. They reported better performance than BIBREF8. Finally, BIBREF11 proposed the extension of the positional encoding of the transformer (cf. Section 2), reporting better performance than BIBREF8 and BIBREF9. 
The use of tokens to condition the output of NMT started with the multilingual models BIBREF15, BIBREF16, and was then further applied to control the use of the politeness form in English-German NMT BIBREF32, in the translation from English into different varieties of the same language BIBREF33, for personalizing NMT to user gender and vocabulary BIBREF34, and finally to perform NMT across different translation styles BIBREF35. Conclusion In this paper, we have proposed two solutions for the problem of controlling the output length of NMT. A first approach, inspired by multilingual NMT, allows a coarse-grained control over the length and no degradation in translation quality. A second approach, inspired by positional encoding, enables a fine-grained control with only a small error in the token count, but at the cost of a lower translation quality. A manual evaluation confirms the translation quality observed with BLEU score. In future work, we plan to design more flexible and context-aware evaluations which allow us to account for short translations that are not equivalent to the original but at the same time do not affect the overall meaning of the discourse.
They partition the training data into three length-ratio classes (short, normal, long) and prepend the corresponding length token to each source sentence; at inference time, the token biases the network to generate a translation in the desired length group.
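A toy illustration of how such a token is used at inference time; the helper below is hypothetical and only mirrors the $<$short$>$/$<$normal$>$/$<$long$>$ notation of the paper:

```python
def add_length_token(source_sentence, desired="short"):
    # Bias the model toward a length group by prefixing the source with
    # the corresponding pseudo-token, exactly as done during training.
    assert desired in {"short", "normal", "long"}
    return f"<{desired}> {source_sentence}"

print(add_length_token("Thank you so much for coming.", "short"))
# -> "<short> Thank you so much for coming."
```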
228425783a4830e576fb98696f76f4c7c0a1b906
228425783a4830e576fb98696f76f4c7c0a1b906_0
Q: Which languages do they focus on? Text: Introduction The sequence to sequence BIBREF0, BIBREF1 approach to Neural Machine Translation (NMT) has shown to improve quality in various translation tasks BIBREF2, BIBREF3, BIBREF4. While translation quality is normally measured in terms of correct transfer of meaning and of fluency, there are several applications of NMT that would benefit from optimizing the output length, such as the translation of document elements that have to fit a given layout – e.g. entries of tables or bullet points of a presentation – or subtitles, which have to fit visual constraints and readability goals, as well as speech dubbing, for which the length of the translation should be as close as possible to the length of the original sentence. Current NMT models do not model explicitly sentence lengths of input and output, and the decoding methods do not allow to specify desired number of tokens to be generated. Instead, they implicitly rely on the observed length of the training examples BIBREF5, BIBREF6. Sequence-to-sequence models have been also applied to text summarization BIBREF7 to map the relevant information found in a long text into a limited-length summary. Such models have shown promising results by directly controlling the output length BIBREF8, BIBREF9, BIBREF10, BIBREF11. However, differently from MT, text summarization (besides being a monolingual task) is characterized by target sentences that are always much shorter than the corresponding source sentences. While in MT, the distribution of the relative lengths of source and target depends on the two languages and can significantly vary from one sentence pair to another due to stylistic decisions of the translator and linguistic constraints (e.g. idiomatic expressions). In this work, we propose two approaches to control the output length of a transformer NMT model. In the first approach, we augment the source side with a token representing a specific length-ratio class, i.e. short, normal, and long, which at training time corresponds to the observed ratio and at inference time to the desired ratio. In the second approach, inspired by recent work in text summarization BIBREF11, we enrich the position encoding used by the transformer model with information representing the position of words with respect to the end of the target string. We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01. 
Background Our proposal is based on the transformer architecture and a recently proposed extension of its positional encoding aimed to control the length of generated sentences in text summarization. Background ::: Transformer Transformer BIBREF12 is a sequence-to-sequence architecture that processes sequences using only attention and feed forward layers. Its core component is the so-called multi-head attention, which computes attention BIBREF0, BIBREF13 between two sequences in a multi-branch fashion BIBREF14. Within the encoder or the decoder, each layer first computes attention between two copies of the same sequence (self-attention). In the decoder, this step is followed by an attention over the encoder output sequence. The last step in each layer is a two-layered time-distributed feed-forward network, with a hidden size larger than its input and output. Attention and feed-forward layers are characterized by a position-invariant processing of their input. Thus, in order to enrich input embeddings in source and target with positional information, they are summed with positional vectors of the same dimension $d$, which are computed with the following trigonometric encoding ($\text{PE}$): for $i=1,\ldots ,d/2$. Background ::: Length encoding in summarization Recently, an extension of the positional encoding BIBREF11 was proposed to model the output length for text summarization. The goal is achieved by computing the distance from every position to the end of the sentence. The new length encoding is present only in the decoder network as an additional vector summed to the input embedding. The authors proposed two different variants. The first variant replaces the variable pos in equations (1-2) with the difference $len - pos$, where len is the sentence length. The second variant attempts to model the proportion of the sentence that has been covered at a given position by replacing the constant 10000 in the denominator of equations (1-2) with $len$. As decoding is performed at the character level, len and pos are given in number of characters. At training time, len is the observed length of the reference summary, while at inference time it is the desired length. Methods We propose two methods to control the output length in NMT. In the first method we partition the training set in three groups according to the observed length ratio of the reference over the source text. The idea is to let the model learn translation variants by observing them jointly with an extra input token. The second method extends the Transformer positional encoding to give information about the remaining sentence length. With this second method the network can leverage fine-grained information about the sentence length. Methods ::: Length Token Method Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\text{min}$ and $t_\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\text{min}$ and $t_\text{max}$ are in the normal group, the ones with ratio below $t_\text{min}$ in short and the remaining in long. 
At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group. Methods ::: Length Encoding Method Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length: where $i=1,\ldots ,d/2$. Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers: where $q_N: [0, 1] \rightarrow \lbrace 0, 1, .., N\rbrace $ is simply defined as $q_N(x) = \lfloor {x \times N}\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9. Methods ::: Combining the two methods We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length. Methods ::: Fine-Tuning for length control Training an NMT model from scratch is a compute intensive and time consuming task. Alternatively, fine-tuning a pre-trained network shows to improve performance in several NMT scenarios BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24. For our length control approaches, we further propose to use fine-tuning an NMT model with length information, instead of training it from scratch. By adopting a fine-tuning strategy, we specifically aim; i) to decouple the performance of the baseline NMT model from that of the additional length information, ii) control the level of aggressiveness that can come from the data (length token) and the model (length encoding), and iii) make the approaches versatile to any pre-trained model. More importantly, it will allow to transform any NMT model to an output length aware version, while getting better improvements on the quality of the generated sequences. Experiments ::: Data and Settings Our experiments are run using the English$\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). 
As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder. In all the experiments, we use the Adam BIBREF26 optimizer with an initial learning rate of $1\times 10^{-7}$ that increases linearly up to $0.001$ for 4000 warm-up steps, and decreases afterwards with the inverse square root of the training step. The dropout is set to $0.3$ in all layers but the attention, where it is $0.1$. The models are trained with label smoothed cross-entropy with a smoothing factor of $0.1$. Training is performed on 8 Nvidia V100 GPUs, with batches of 4500 tokens per GPU. Gradients are accumulated for 16 batches in each GPU BIBREF27. We select the models for evaluation by applying early stopping based on the validation loss. All texts are tokenized with scripts from the Moses toolkit BIBREF28, and then words are segmented with BPE BIBREF17 with 32K joint merge rules. For evaluation we take the best performing checkpoint on the dev set according to the loss. The size of the data clusters used for the length token method and their corresponding target-source length ratios are reported in Table TABREF19. The value of $N$ of the relative encoding is set to a small value (5), as in preliminary experiments we observed that a high value (100) produces results similar to the absolute encoding. Experiments ::: Models We evaluate our Baseline Transformer using two decoding strategies: i) a standard beam search inference (standard), and ii) beam search with length penalty (penalty) set to $0.5$ to favor shorter translations BIBREF29. Length token models are evaluated with three strategies that correspond to the tokens prepended to the source test set at a time (short, normal, and long), and reported as Len-Tok. Length encoding (Len-Enc) models are evaluated in a length matching condition, i.e. output length has to match input length. We report the relative (Rel) and absolute (Abs) strategies of the approach as discussed in Section SECREF10. In the small data condition, we additionally evaluated how the fine-tuning strategy compares with a model trained from scratch. In the large data condition, we added a setting that combines both the length-token and length-encoding strategies. Experiments ::: Evaluation To evaluate all models' performance we compute BLEU BIBREF30 with the multi-bleu.perl implementation on the single-reference test sets of the En-It and En-De pairs. Given the absence of multiple references covering different length ratios, we also report n-gram precision scores (BLEU$^*$), by multiplying the BLEU score by the inverse of the brevity penalty BIBREF30. BLEU$^*$ scores is meant to measure to what extent shorter translations are subset of longer translations. 
The impact on translation lengths is evaluated with the mean sentence-level length ratios between MT output and source (LR$^{src}$) and between MT output and reference (LR$^{ref}$). Results We performed experiments in two conditions: small data and larger data. In the small data condition we only use the MuST-C training set. In the large data condition, a baseline model is first trained on large data, then it is fine-tuned on the MuST-C training set using the proposed methods. Tables TABREF23 and TABREF26 lists the results for the small and large data conditions. For the two language directions they show BLEU and BLEU* scores, as well as the average length ratios. Results ::: Small Data condition The baselines generate translations longer than the source sentence side, with a length ratio of 1.05 for Italian and 1.11 for German. Decoding with length penalty (penalty) slightly decreases the length ratios but they are still far from our goal of LR$^{src}$=1.00. Fine-tuning. A comparison of the models trained from scratch (central portion of Table TABREF23) with their counterparts fine-tuned from the baseline (last portion of Table TABREF23) shows that the models in the first group generally generate shorter translations, but of worse quality. Additionally, the results with fine-tuning are not much different from the baseline. Existing models can be enhanced to produce shorter sentences, and little variation is observed in their translation quality. Length tokens. Fine-tuning with Len-Tok (Fourth section in Table TABREF23) gives a coarse-grained control over the length, while keeping BLEU scores similar to the baseline or slightly better. Decoding with the token normal leads to translations slightly shorter than the baseline for En-It (LR$^{src}$=1.05 and LR$^{ref}$=1.02), while the token small strongly reduces the translation lengths up to almost the source length (LR$^{src}$=1.01). In the opposite side, the token long generates longer translations which are slightly worse than the others (32.00). A similar behavior is observed for En-De, where the LR$^{src}$ goes from 1.12 to 1.07 when changing normal with short, and to 1.15 with long. The results with the token long are not interesting for our task and are given only for the sake of completeness. Length Encoding. The last section of Table TABREF23 lists the results of using length encoding (Len-Enc) relative (Rel) and absolute (Abs). The two encodings lead to different generated lengths, with Abs being always shorter than Rel. Unfortunately, these improvements in the lengths correspond to a significant degradation in translation quality, mostly due to truncated sentences. Results ::: Large data condition Our Baselines for the large data condition generate sentences with length ratios over the source comparable to the small data condition (LR$^\text{src}$ and LR$^\text{ref}$), but with better translation quality: 35.46 BLEU points for En-It and 33.96 for En-De. Length penalty slightly reduces the length ratios, which results in a 0.3 BLEU points improvement in Italian and -0.3 in German because of the brevity penalty. In the latter case, the BLEU* is slightly better than the standard baseline output. Also for the large data condition, while the length penalty slightly helps to shorten the translations, its effect is minimal and insufficient for our goal. Length tokens. In En-It there is no noticeable difference in translation quality between the tokens normal and short, while there is a degradation of $\sim 0.7$ points when using long. 
This last result is consistent with the ones observed before. Also in this case the token short does not degrade the BLEU score, and obtains the highest precision BLEU* with 36.22. In En-De we obtain the best results with token normal (34.46), which matches the length distribution of the references. The token short generates much shorter outputs (LR$^\text{src}$=1.05), which are also much shorter than the reference (LR$^\text{ref}=0.93$). Consequently the BLEU score degrades significantly (30.61), and also the BLEU* is 1 point lower than with the token normal. Longer translations can be generated with the token long, but they always come at the expense of lower quality. Length encoding. For En-It, Len-Enc Rel in Table TABREF26 achieves a LR$^\text{src}$ of 1.01 with a slight degradation of $0.3$ BLEU points over the baseline, while in the case of Abs the degradation is higher (-1.6) and LR$^\text{src}$ is similar (1.02). Also in En-De the degradation of Rel over the baseline is only -0.3, but the reduction in terms of LR$^\text{src}$ is very small (1.11 vs 1.13). On the other side, Abs produces much shorter translations (1.03 LR$^\text{src}$) at the expense of a significantly lower BLEU score (30.79). When computing the BLEU* score, the absolute encoding is only 0.45 points lower than the relative encoding (33.29 vs 33.74), but -0.8 lower than the baseline. Token + Encoding. So far, we have observed generally good results using the token method and translating with the tokens short and normal. while the length encoding generally produces a more predictable output length, in particular for the absolute variant. In the last experiment, we combine the two methods in order to have a system that can capture different styles (short, normal, long), as well as explicitly leveraging length information. The results listed in the last portion of Table TABREF26 (Tok+Enc) show that the relative encoding Rel produces better translations than Abs, but again it has less predictability in output length. For instance, in En-It the LR$^\text{src}$ of Rel is 0.96 with token short and 1.02 with normal, while for En-De it is 1.01 with short and 1.08 with normal. On the other side, the Abs produces LR$^\text{src}$ of 1.01 with both tokens in En-It and also with short in En-De, and it increases to only 1.03 with normal. Controlling output length. In order to achieve LR$^\text{src}$ as close as possible to 1.0, we set the target length during generation equal to the source length when using the length encoding methods. However, one advantage of length encoding is the possibility to set the target length to modify the average output length. We illustrate this option by using the Tok+Enc Rel system for En-It, and translating with the tokens normal or short and different scaling factors for the target length. The results, listed in Table TABREF27, show that we are able to approach an LR$^{src}$ of 1.0 with both tokens and the BLEU score is not affected with token normal (35.45) or improves with token short (35.11). Discussion. Length token is an effective approach to generate translations of different lengths, but it does not allow a fine-grained control of the output lengths and its results depend on the partition of the training set into groups, which is a manual process. Length encoding allows to change the output length, but the two variants have different effects. Absolute encoding is more accurate but generates sentences with missing information. 
The relative encoding produces better translations than the absolute encoding, but its control over the translation length is more loose. The increased length stability is captured by the standard deviation of the length ratio with the source, which is $0.14$ for length tokens, $\sim 0.11$ for relative encoding and $\sim 0.07$ for absolute encoding. The advantage of the combined approach is that it can generate sentences with different style to fit different length groups, and the output length can also be tuned by modifying the target length, while no important quality degradation is observed. Additionally, the standard deviation of the lengths is the same as for the length encoding used. Results ::: Human Evaluation and Analysis After manually inspecting the outputs of the best performing models under the large data condition, we decided to run a human evaluation only for the En-It Len-Tok model. As our ultimate goal is to be able to generate shorter translations and as close as possible to the length of the source sentences, we focused the manual evaluation on the Short output class and aimed to verify possible losses in quality with respect to the baseline system. We ran a head-to-head evaluation on the first 10 sentences of each test talk, for a total of 270 sentences, by asking annotators to blindly rank the two system outputs (ties were also permitted) in terms of quality with respect to a reference translation. We collected three judgments for each output, from 19 annotators, for a total of 807 scores (one sentence had to be discarded). Inter-annotator agreement measured with Fleiss' kappa was 0.35 (= fair agreement). Results reported in Table TABREF32 confirm the small differences observed in BLEU scores: there are only a 4% more wins for the Baseline and almost 60% of ties. The small degradation in quality of the shorter translations is statistically significant ($p<0.05$), as well as their difference in length ($p<0.001$). Notice that the evaluation was quite severe towards the shorter translations, as even small changes of the meaning could affect the ranking. After the manual evaluation, we analyzed sentences in which shorter translations were unanimously judged equal or better than the standard translations. We hence tried to identify the linguistic skills involved in the generation of shorter translations, namely: (i) use of abbreviations, (ii) preference of simple verb tenses over compound tenses, (iii) avoidance of not relevant adjective, adverbs, pronouns and articles, (iv) use of paraphrases. Table TABREF33 shows examples of the application of the above strategies as found in the test set. Related works As an integration of Section 2, we try to provide a more complete picture on previous work with seq-to-seq models to control the output length for text summarization, and on the use of tokens to bias in different ways the output of NMT. In text summarization, BIBREF8 proposed methods to control output length either by modifying the search process or the seq-to-seq model itself, showing that the latter being more promising. BIBREF9 addressed the problem similarly to our token approach, by training the model on data bins of homogeneous output length and conditioning the output on a length token. They reported better performance than BIBREF8. Finally, BIBREF11 proposed the extension of the positional encoding of the transformer (cf. Section 2), reporting better performance than BIBREF8 and BIBREF9. 
The use of tokens to condition the output of NMT started with the multilingual models BIBREF15, BIBREF16, and was then further applied to control the use of the politeness form in English-German NMT BIBREF32, in the translation from English into different varieties of the same language BIBREF33, for personalizing NMT to user gender and vocabulary BIBREF34, and finally to perform NMT across different translation styles BIBREF35. Conclusion In this paper, we have proposed two solutions for the problem of controlling the output length of NMT. A first approach, inspired by multilingual NMT, allows a coarse-grained control over the length and no degradation in translation quality. A second approach, inspired by positional encoding, enables a fine-grained control with only a small error in the token count, but at the cost of a lower translation quality. A manual evaluation confirms the translation quality observed with BLEU score. In future work, we plan to design more flexible and context-aware evaluations which allow us to account for short translations that are not equivalent to the original but at the same time do not affect the overall meaning of the discourse.
two translation directions (En-It and En-De)
9d1135303212356f3420ed010dcbe58203cc7db4
9d1135303212356f3420ed010dcbe58203cc7db4_0
Q: What dataset do they use? Text: Introduction The sequence to sequence BIBREF0, BIBREF1 approach to Neural Machine Translation (NMT) has shown to improve quality in various translation tasks BIBREF2, BIBREF3, BIBREF4. While translation quality is normally measured in terms of correct transfer of meaning and of fluency, there are several applications of NMT that would benefit from optimizing the output length, such as the translation of document elements that have to fit a given layout – e.g. entries of tables or bullet points of a presentation – or subtitles, which have to fit visual constraints and readability goals, as well as speech dubbing, for which the length of the translation should be as close as possible to the length of the original sentence. Current NMT models do not model explicitly sentence lengths of input and output, and the decoding methods do not allow to specify desired number of tokens to be generated. Instead, they implicitly rely on the observed length of the training examples BIBREF5, BIBREF6. Sequence-to-sequence models have been also applied to text summarization BIBREF7 to map the relevant information found in a long text into a limited-length summary. Such models have shown promising results by directly controlling the output length BIBREF8, BIBREF9, BIBREF10, BIBREF11. However, differently from MT, text summarization (besides being a monolingual task) is characterized by target sentences that are always much shorter than the corresponding source sentences. While in MT, the distribution of the relative lengths of source and target depends on the two languages and can significantly vary from one sentence pair to another due to stylistic decisions of the translator and linguistic constraints (e.g. idiomatic expressions). In this work, we propose two approaches to control the output length of a transformer NMT model. In the first approach, we augment the source side with a token representing a specific length-ratio class, i.e. short, normal, and long, which at training time corresponds to the observed ratio and at inference time to the desired ratio. In the second approach, inspired by recent work in text summarization BIBREF11, we enrich the position encoding used by the transformer model with information representing the position of words with respect to the end of the target string. We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01. 
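The average length ratio quoted above (from 1.05 to 1.01) is the mean, over the test set, of the sentence-level character length ratio between the MT output and the source. A minimal, illustrative sketch of this computation is given below; the variable names are hypothetical.

```python
import numpy as np

def length_ratio_stats(outputs, sources):
    """Mean and standard deviation of the sentence-level character length
    ratio between MT outputs and the corresponding sources (LR^src)."""
    ratios = [len(o) / len(s) for o, s in zip(outputs, sources)]
    return float(np.mean(ratios)), float(np.std(ratios))

mean_lr, std_lr = length_ratio_stats(
    ["una frase corta", "un'altra frase di prova"],
    ["a short sentence", "another test sentence"],
)
print(f"LR^src = {mean_lr:.2f} (std {std_lr:.2f})")
```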
Background Our proposal is based on the transformer architecture and a recently proposed extension of its positional encoding aimed to control the length of generated sentences in text summarization. Background ::: Transformer Transformer BIBREF12 is a sequence-to-sequence architecture that processes sequences using only attention and feed forward layers. Its core component is the so-called multi-head attention, which computes attention BIBREF0, BIBREF13 between two sequences in a multi-branch fashion BIBREF14. Within the encoder or the decoder, each layer first computes attention between two copies of the same sequence (self-attention). In the decoder, this step is followed by an attention over the encoder output sequence. The last step in each layer is a two-layered time-distributed feed-forward network, with a hidden size larger than its input and output. Attention and feed-forward layers are characterized by a position-invariant processing of their input. Thus, in order to enrich input embeddings in source and target with positional information, they are summed with positional vectors of the same dimension $d$, which are computed with the following trigonometric encoding ($\text{PE}$): for $i=1,\ldots ,d/2$. Background ::: Length encoding in summarization Recently, an extension of the positional encoding BIBREF11 was proposed to model the output length for text summarization. The goal is achieved by computing the distance from every position to the end of the sentence. The new length encoding is present only in the decoder network as an additional vector summed to the input embedding. The authors proposed two different variants. The first variant replaces the variable pos in equations (1-2) with the difference $len - pos$, where len is the sentence length. The second variant attempts to model the proportion of the sentence that has been covered at a given position by replacing the constant 10000 in the denominator of equations (1-2) with $len$. As decoding is performed at the character level, len and pos are given in number of characters. At training time, len is the observed length of the reference summary, while at inference time it is the desired length. Methods We propose two methods to control the output length in NMT. In the first method we partition the training set in three groups according to the observed length ratio of the reference over the source text. The idea is to let the model learn translation variants by observing them jointly with an extra input token. The second method extends the Transformer positional encoding to give information about the remaining sentence length. With this second method the network can leverage fine-grained information about the sentence length. Methods ::: Length Token Method Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\text{min}$ and $t_\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\text{min}$ and $t_\text{max}$ are in the normal group, the ones with ratio below $t_\text{min}$ in short and the remaining in long. 
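The grouping step just described can be sketched in a few lines. The threshold values below are placeholders for illustration, not the ones actually selected from the length-ratio distribution.

```python
def length_group(src: str, tgt: str, t_min: float = 0.95, t_max: float = 1.05) -> str:
    """Assign a sentence pair to the short / normal / long group according
    to the character-level target/source length ratio."""
    ratio = len(tgt) / max(1, len(src))
    if ratio < t_min:
        return "short"
    if ratio <= t_max:
        return "normal"
    return "long"

print(length_group("the cat sat on the mat", "il gatto si sedette sul tappeto"))
```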
At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group. Methods ::: Length Encoding Method Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length: where $i=1,\ldots ,d/2$. Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers: where $q_N: [0, 1] \rightarrow \lbrace 0, 1, .., N\rbrace $ is simply defined as $q_N(x) = \lfloor {x \times N}\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9. Methods ::: Combining the two methods We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length. Methods ::: Fine-Tuning for length control Training an NMT model from scratch is a compute intensive and time consuming task. Alternatively, fine-tuning a pre-trained network shows to improve performance in several NMT scenarios BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24. For our length control approaches, we further propose to use fine-tuning an NMT model with length information, instead of training it from scratch. By adopting a fine-tuning strategy, we specifically aim; i) to decouple the performance of the baseline NMT model from that of the additional length information, ii) control the level of aggressiveness that can come from the data (length token) and the model (length encoding), and iii) make the approaches versatile to any pre-trained model. More importantly, it will allow to transform any NMT model to an output length aware version, while getting better improvements on the quality of the generated sequences. Experiments ::: Data and Settings Our experiments are run using the English$\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). 
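Referring back to the length encodings defined earlier in this section, the sketch below gives one plausible reading of the two variants: the absolute variant applies the trigonometric encoding to the remaining character count (len - pos), and the relative variant applies it to the quantized distance to the end (q_N). This is an illustrative sketch under those assumptions, not the authors' implementation; d is assumed even.

```python
import numpy as np

def sinusoid(value: float, d: int) -> np.ndarray:
    """Trigonometric encoding of a scalar, as in the Transformer PE (d even)."""
    i = np.arange(d // 2)
    angles = value / np.power(10000, 2 * i / d)
    enc = np.zeros(d)
    enc[0::2], enc[1::2] = np.sin(angles), np.cos(angles)
    return enc

def absolute_length_encoding(pos: int, length: int, d: int) -> np.ndarray:
    # Encodes the remaining number of characters, len - pos.
    return sinusoid(length - pos, d)

def relative_length_encoding(pos: int, length: int, d: int, N: int = 5) -> np.ndarray:
    # Quantizes the relative distance to the end into {0, ..., N} (q_N).
    q = int(np.floor((length - pos) / length * N))
    return sinusoid(q, d)

print(absolute_length_encoding(pos=10, length=42, d=8))
```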
As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder. In all the experiments, we use the Adam BIBREF26 optimizer with an initial learning rate of $1\times 10^{-7}$ that increases linearly up to $0.001$ for 4000 warm-up steps, and decreases afterwards with the inverse square root of the training step. The dropout is set to $0.3$ in all layers but the attention, where it is $0.1$. The models are trained with label smoothed cross-entropy with a smoothing factor of $0.1$. Training is performed on 8 Nvidia V100 GPUs, with batches of 4500 tokens per GPU. Gradients are accumulated for 16 batches in each GPU BIBREF27. We select the models for evaluation by applying early stopping based on the validation loss. All texts are tokenized with scripts from the Moses toolkit BIBREF28, and then words are segmented with BPE BIBREF17 with 32K joint merge rules. For evaluation we take the best performing checkpoint on the dev set according to the loss. The size of the data clusters used for the length token method and their corresponding target-source length ratios are reported in Table TABREF19. The value of $N$ of the relative encoding is set to a small value (5), as in preliminary experiments we observed that a high value (100) produces results similar to the absolute encoding. Experiments ::: Models We evaluate our Baseline Transformer using two decoding strategies: i) a standard beam search inference (standard), and ii) beam search with length penalty (penalty) set to $0.5$ to favor shorter translations BIBREF29. Length token models are evaluated with three strategies that correspond to the tokens prepended to the source test set at a time (short, normal, and long), and reported as Len-Tok. Length encoding (Len-Enc) models are evaluated in a length matching condition, i.e. output length has to match input length. We report the relative (Rel) and absolute (Abs) strategies of the approach as discussed in Section SECREF10. In the small data condition, we additionally evaluated how the fine-tuning strategy compares with a model trained from scratch. In the large data condition, we added a setting that combines both the length-token and length-encoding strategies. Experiments ::: Evaluation To evaluate all models' performance we compute BLEU BIBREF30 with the multi-bleu.perl implementation on the single-reference test sets of the En-It and En-De pairs. Given the absence of multiple references covering different length ratios, we also report n-gram precision scores (BLEU$^*$), by multiplying the BLEU score by the inverse of the brevity penalty BIBREF30. BLEU$^*$ scores is meant to measure to what extent shorter translations are subset of longer translations. 
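Since BLEU* is simply BLEU with the brevity penalty factored out, it can be recovered from the corpus BLEU score and the system/reference lengths. The sketch below uses the standard BLEU brevity-penalty definition and is for illustration only.

```python
import math

def brevity_penalty(sys_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty (1.0 when the system output is
    at least as long as the reference)."""
    return 1.0 if sys_len >= ref_len else math.exp(1.0 - ref_len / sys_len)

def bleu_star(bleu: float, sys_len: int, ref_len: int) -> float:
    """BLEU* = BLEU multiplied by the inverse of the brevity penalty."""
    return bleu / brevity_penalty(sys_len, ref_len)

# Example: an output 10% shorter than the reference with corpus BLEU 30.61.
print(round(bleu_star(30.61, sys_len=90_000, ref_len=100_000), 2))
```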
The impact on translation lengths is evaluated with the mean sentence-level length ratios between MT output and source (LR$^{src}$) and between MT output and reference (LR$^{ref}$). Results We performed experiments in two conditions: small data and larger data. In the small data condition we only use the MuST-C training set. In the large data condition, a baseline model is first trained on large data, then it is fine-tuned on the MuST-C training set using the proposed methods. Tables TABREF23 and TABREF26 lists the results for the small and large data conditions. For the two language directions they show BLEU and BLEU* scores, as well as the average length ratios. Results ::: Small Data condition The baselines generate translations longer than the source sentence side, with a length ratio of 1.05 for Italian and 1.11 for German. Decoding with length penalty (penalty) slightly decreases the length ratios but they are still far from our goal of LR$^{src}$=1.00. Fine-tuning. A comparison of the models trained from scratch (central portion of Table TABREF23) with their counterparts fine-tuned from the baseline (last portion of Table TABREF23) shows that the models in the first group generally generate shorter translations, but of worse quality. Additionally, the results with fine-tuning are not much different from the baseline. Existing models can be enhanced to produce shorter sentences, and little variation is observed in their translation quality. Length tokens. Fine-tuning with Len-Tok (Fourth section in Table TABREF23) gives a coarse-grained control over the length, while keeping BLEU scores similar to the baseline or slightly better. Decoding with the token normal leads to translations slightly shorter than the baseline for En-It (LR$^{src}$=1.05 and LR$^{ref}$=1.02), while the token small strongly reduces the translation lengths up to almost the source length (LR$^{src}$=1.01). In the opposite side, the token long generates longer translations which are slightly worse than the others (32.00). A similar behavior is observed for En-De, where the LR$^{src}$ goes from 1.12 to 1.07 when changing normal with short, and to 1.15 with long. The results with the token long are not interesting for our task and are given only for the sake of completeness. Length Encoding. The last section of Table TABREF23 lists the results of using length encoding (Len-Enc) relative (Rel) and absolute (Abs). The two encodings lead to different generated lengths, with Abs being always shorter than Rel. Unfortunately, these improvements in the lengths correspond to a significant degradation in translation quality, mostly due to truncated sentences. Results ::: Large data condition Our Baselines for the large data condition generate sentences with length ratios over the source comparable to the small data condition (LR$^\text{src}$ and LR$^\text{ref}$), but with better translation quality: 35.46 BLEU points for En-It and 33.96 for En-De. Length penalty slightly reduces the length ratios, which results in a 0.3 BLEU points improvement in Italian and -0.3 in German because of the brevity penalty. In the latter case, the BLEU* is slightly better than the standard baseline output. Also for the large data condition, while the length penalty slightly helps to shorten the translations, its effect is minimal and insufficient for our goal. Length tokens. In En-It there is no noticeable difference in translation quality between the tokens normal and short, while there is a degradation of $\sim 0.7$ points when using long. 
This last result is consistent with the ones observed before. Also in this case the token short does not degrade the BLEU score, and obtains the highest precision BLEU* with 36.22. In En-De we obtain the best results with token normal (34.46), which matches the length distribution of the references. The token short generates much shorter outputs (LR$^\text{src}$=1.05), which are also much shorter than the reference (LR$^\text{ref}=0.93$). Consequently the BLEU score degrades significantly (30.61), and also the BLEU* is 1 point lower than with the token normal. Longer translations can be generated with the token long, but they always come at the expense of lower quality. Length encoding. For En-It, Len-Enc Rel in Table TABREF26 achieves a LR$^\text{src}$ of 1.01 with a slight degradation of $0.3$ BLEU points over the baseline, while in the case of Abs the degradation is higher (-1.6) and LR$^\text{src}$ is similar (1.02). Also in En-De the degradation of Rel over the baseline is only -0.3, but the reduction in terms of LR$^\text{src}$ is very small (1.11 vs 1.13). On the other side, Abs produces much shorter translations (1.03 LR$^\text{src}$) at the expense of a significantly lower BLEU score (30.79). When computing the BLEU* score, the absolute encoding is only 0.45 points lower than the relative encoding (33.29 vs 33.74), but -0.8 lower than the baseline. Token + Encoding. So far, we have observed generally good results using the token method and translating with the tokens short and normal. while the length encoding generally produces a more predictable output length, in particular for the absolute variant. In the last experiment, we combine the two methods in order to have a system that can capture different styles (short, normal, long), as well as explicitly leveraging length information. The results listed in the last portion of Table TABREF26 (Tok+Enc) show that the relative encoding Rel produces better translations than Abs, but again it has less predictability in output length. For instance, in En-It the LR$^\text{src}$ of Rel is 0.96 with token short and 1.02 with normal, while for En-De it is 1.01 with short and 1.08 with normal. On the other side, the Abs produces LR$^\text{src}$ of 1.01 with both tokens in En-It and also with short in En-De, and it increases to only 1.03 with normal. Controlling output length. In order to achieve LR$^\text{src}$ as close as possible to 1.0, we set the target length during generation equal to the source length when using the length encoding methods. However, one advantage of length encoding is the possibility to set the target length to modify the average output length. We illustrate this option by using the Tok+Enc Rel system for En-It, and translating with the tokens normal or short and different scaling factors for the target length. The results, listed in Table TABREF27, show that we are able to approach an LR$^{src}$ of 1.0 with both tokens and the BLEU score is not affected with token normal (35.45) or improves with token short (35.11). Discussion. Length token is an effective approach to generate translations of different lengths, but it does not allow a fine-grained control of the output lengths and its results depend on the partition of the training set into groups, which is a manual process. Length encoding allows to change the output length, but the two variants have different effects. Absolute encoding is more accurate but generates sentences with missing information. 
The relative encoding produces better translations than the absolute encoding, but its control over the translation length is more loose. The increased length stability is captured by the standard deviation of the length ratio with the source, which is $0.14$ for length tokens, $\sim 0.11$ for relative encoding and $\sim 0.07$ for absolute encoding. The advantage of the combined approach is that it can generate sentences with different style to fit different length groups, and the output length can also be tuned by modifying the target length, while no important quality degradation is observed. Additionally, the standard deviation of the lengths is the same as for the length encoding used. Results ::: Human Evaluation and Analysis After manually inspecting the outputs of the best performing models under the large data condition, we decided to run a human evaluation only for the En-It Len-Tok model. As our ultimate goal is to be able to generate shorter translations and as close as possible to the length of the source sentences, we focused the manual evaluation on the Short output class and aimed to verify possible losses in quality with respect to the baseline system. We ran a head-to-head evaluation on the first 10 sentences of each test talk, for a total of 270 sentences, by asking annotators to blindly rank the two system outputs (ties were also permitted) in terms of quality with respect to a reference translation. We collected three judgments for each output, from 19 annotators, for a total of 807 scores (one sentence had to be discarded). Inter-annotator agreement measured with Fleiss' kappa was 0.35 (= fair agreement). Results reported in Table TABREF32 confirm the small differences observed in BLEU scores: there are only a 4% more wins for the Baseline and almost 60% of ties. The small degradation in quality of the shorter translations is statistically significant ($p<0.05$), as well as their difference in length ($p<0.001$). Notice that the evaluation was quite severe towards the shorter translations, as even small changes of the meaning could affect the ranking. After the manual evaluation, we analyzed sentences in which shorter translations were unanimously judged equal or better than the standard translations. We hence tried to identify the linguistic skills involved in the generation of shorter translations, namely: (i) use of abbreviations, (ii) preference of simple verb tenses over compound tenses, (iii) avoidance of not relevant adjective, adverbs, pronouns and articles, (iv) use of paraphrases. Table TABREF33 shows examples of the application of the above strategies as found in the test set. Related works As an integration of Section 2, we try to provide a more complete picture on previous work with seq-to-seq models to control the output length for text summarization, and on the use of tokens to bias in different ways the output of NMT. In text summarization, BIBREF8 proposed methods to control output length either by modifying the search process or the seq-to-seq model itself, showing that the latter being more promising. BIBREF9 addressed the problem similarly to our token approach, by training the model on data bins of homogeneous output length and conditioning the output on a length token. They reported better performance than BIBREF8. Finally, BIBREF11 proposed the extension of the positional encoding of the transformer (cf. Section 2), reporting better performance than BIBREF8 and BIBREF9. 
The use of tokens to condition the output of NMT started with the multilingual models BIBREF15, BIBREF16, and was then further applied to control the use of the politeness form in English-German NMT BIBREF32, in the translation from English into different varieties of the same language BIBREF33, for personalizing NMT to user gender and vocabulary BIBREF34, and finally to perform NMT across different translation styles BIBREF35. Conclusion In this paper, we have proposed two solutions for the problem of controlling the output length of NMT. A first approach, inspired by multilingual NMT, allows a coarse-grained control over the length and no degradation in translation quality. A second approach, inspired by positional encoding, enables a fine-grained control with only a small error in the token count, but at the cost of a lower translation quality. A manual evaluation confirms the translation quality observed with BLEU score. In future work, we plan to design more flexible and context-aware evaluations which allow us to account for short translations that are not equivalent to the original but at the same time do not affect the overall meaning of the discourse.
English$\rightarrow $Italian/German portions of the MuST-C corpus, As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)
d8bf4a29c7af213a9a176eb1503ec97d01cc8f51
d8bf4a29c7af213a9a176eb1503ec97d01cc8f51_0
Q: Do they experiment with combining both methods? Text: Introduction The sequence to sequence BIBREF0, BIBREF1 approach to Neural Machine Translation (NMT) has shown to improve quality in various translation tasks BIBREF2, BIBREF3, BIBREF4. While translation quality is normally measured in terms of correct transfer of meaning and of fluency, there are several applications of NMT that would benefit from optimizing the output length, such as the translation of document elements that have to fit a given layout – e.g. entries of tables or bullet points of a presentation – or subtitles, which have to fit visual constraints and readability goals, as well as speech dubbing, for which the length of the translation should be as close as possible to the length of the original sentence. Current NMT models do not model explicitly sentence lengths of input and output, and the decoding methods do not allow to specify desired number of tokens to be generated. Instead, they implicitly rely on the observed length of the training examples BIBREF5, BIBREF6. Sequence-to-sequence models have been also applied to text summarization BIBREF7 to map the relevant information found in a long text into a limited-length summary. Such models have shown promising results by directly controlling the output length BIBREF8, BIBREF9, BIBREF10, BIBREF11. However, differently from MT, text summarization (besides being a monolingual task) is characterized by target sentences that are always much shorter than the corresponding source sentences. While in MT, the distribution of the relative lengths of source and target depends on the two languages and can significantly vary from one sentence pair to another due to stylistic decisions of the translator and linguistic constraints (e.g. idiomatic expressions). In this work, we propose two approaches to control the output length of a transformer NMT model. In the first approach, we augment the source side with a token representing a specific length-ratio class, i.e. short, normal, and long, which at training time corresponds to the observed ratio and at inference time to the desired ratio. In the second approach, inspired by recent work in text summarization BIBREF11, we enrich the position encoding used by the transformer model with information representing the position of words with respect to the end of the target string. We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01. 
Background Our proposal is based on the transformer architecture and a recently proposed extension of its positional encoding aimed to control the length of generated sentences in text summarization. Background ::: Transformer Transformer BIBREF12 is a sequence-to-sequence architecture that processes sequences using only attention and feed forward layers. Its core component is the so-called multi-head attention, which computes attention BIBREF0, BIBREF13 between two sequences in a multi-branch fashion BIBREF14. Within the encoder or the decoder, each layer first computes attention between two copies of the same sequence (self-attention). In the decoder, this step is followed by an attention over the encoder output sequence. The last step in each layer is a two-layered time-distributed feed-forward network, with a hidden size larger than its input and output. Attention and feed-forward layers are characterized by a position-invariant processing of their input. Thus, in order to enrich input embeddings in source and target with positional information, they are summed with positional vectors of the same dimension $d$, which are computed with the following trigonometric encoding ($\text{PE}$): for $i=1,\ldots ,d/2$. Background ::: Length encoding in summarization Recently, an extension of the positional encoding BIBREF11 was proposed to model the output length for text summarization. The goal is achieved by computing the distance from every position to the end of the sentence. The new length encoding is present only in the decoder network as an additional vector summed to the input embedding. The authors proposed two different variants. The first variant replaces the variable pos in equations (1-2) with the difference $len - pos$, where len is the sentence length. The second variant attempts to model the proportion of the sentence that has been covered at a given position by replacing the constant 10000 in the denominator of equations (1-2) with $len$. As decoding is performed at the character level, len and pos are given in number of characters. At training time, len is the observed length of the reference summary, while at inference time it is the desired length. Methods We propose two methods to control the output length in NMT. In the first method we partition the training set in three groups according to the observed length ratio of the reference over the source text. The idea is to let the model learn translation variants by observing them jointly with an extra input token. The second method extends the Transformer positional encoding to give information about the remaining sentence length. With this second method the network can leverage fine-grained information about the sentence length. Methods ::: Length Token Method Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\text{min}$ and $t_\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\text{min}$ and $t_\text{max}$ are in the normal group, the ones with ratio below $t_\text{min}$ in short and the remaining in long. 
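The paper states only that the two thresholds are selected according to the length-ratio distribution. One plausible, purely illustrative choice is to take the tertiles of the observed target/source character ratios so that the three groups have comparable sizes, as sketched below.

```python
import numpy as np

def select_thresholds(pairs):
    """Pick t_min and t_max as the tertiles of the observed target/source
    character length ratios (one possible heuristic, not the paper's)."""
    ratios = [len(tgt) / max(1, len(src)) for src, tgt in pairs]
    t_min, t_max = np.percentile(ratios, [33.3, 66.6])
    return float(t_min), float(t_max)

training_pairs = [
    ("good morning", "buongiorno"),
    ("thank you very much", "grazie mille"),
    ("see you tomorrow", "ci vediamo domani"),
]
print(select_thresholds(training_pairs))
```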
At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group. Methods ::: Length Encoding Method Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length: where $i=1,\ldots ,d/2$. Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers: where $q_N: [0, 1] \rightarrow \lbrace 0, 1, .., N\rbrace $ is simply defined as $q_N(x) = \lfloor {x \times N}\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9. Methods ::: Combining the two methods We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length. Methods ::: Fine-Tuning for length control Training an NMT model from scratch is a compute intensive and time consuming task. Alternatively, fine-tuning a pre-trained network shows to improve performance in several NMT scenarios BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24. For our length control approaches, we further propose to use fine-tuning an NMT model with length information, instead of training it from scratch. By adopting a fine-tuning strategy, we specifically aim; i) to decouple the performance of the baseline NMT model from that of the additional length information, ii) control the level of aggressiveness that can come from the data (length token) and the model (length encoding), and iii) make the approaches versatile to any pre-trained model. More importantly, it will allow to transform any NMT model to an output length aware version, while getting better improvements on the quality of the generated sequences. Experiments ::: Data and Settings Our experiments are run using the English$\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). 
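Going back to the token-prepending step described at the beginning of this section, a minimal sketch is shown below; the exact token strings are assumptions for illustration.

```python
def prepend_length_token(src: str, group: str) -> str:
    """Prepend <short>, <normal> or <long> so that a single network can
    condition its output length on the token."""
    assert group in {"short", "normal", "long"}
    return f"<{group}> {src}"

# Training: group comes from the observed target/source length ratio.
# Inference: group is the desired output-length class.
print(prepend_length_token("the show starts at eight", "short"))
```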
As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder. In all the experiments, we use the Adam BIBREF26 optimizer with an initial learning rate of $1\times 10^{-7}$ that increases linearly up to $0.001$ for 4000 warm-up steps, and decreases afterwards with the inverse square root of the training step. The dropout is set to $0.3$ in all layers but the attention, where it is $0.1$. The models are trained with label smoothed cross-entropy with a smoothing factor of $0.1$. Training is performed on 8 Nvidia V100 GPUs, with batches of 4500 tokens per GPU. Gradients are accumulated for 16 batches in each GPU BIBREF27. We select the models for evaluation by applying early stopping based on the validation loss. All texts are tokenized with scripts from the Moses toolkit BIBREF28, and then words are segmented with BPE BIBREF17 with 32K joint merge rules. For evaluation we take the best performing checkpoint on the dev set according to the loss. The size of the data clusters used for the length token method and their corresponding target-source length ratios are reported in Table TABREF19. The value of $N$ of the relative encoding is set to a small value (5), as in preliminary experiments we observed that a high value (100) produces results similar to the absolute encoding. Experiments ::: Models We evaluate our Baseline Transformer using two decoding strategies: i) a standard beam search inference (standard), and ii) beam search with length penalty (penalty) set to $0.5$ to favor shorter translations BIBREF29. Length token models are evaluated with three strategies that correspond to the tokens prepended to the source test set at a time (short, normal, and long), and reported as Len-Tok. Length encoding (Len-Enc) models are evaluated in a length matching condition, i.e. output length has to match input length. We report the relative (Rel) and absolute (Abs) strategies of the approach as discussed in Section SECREF10. In the small data condition, we additionally evaluated how the fine-tuning strategy compares with a model trained from scratch. In the large data condition, we added a setting that combines both the length-token and length-encoding strategies. Experiments ::: Evaluation To evaluate all models' performance we compute BLEU BIBREF30 with the multi-bleu.perl implementation on the single-reference test sets of the En-It and En-De pairs. Given the absence of multiple references covering different length ratios, we also report n-gram precision scores (BLEU$^*$), by multiplying the BLEU score by the inverse of the brevity penalty BIBREF30. BLEU$^*$ scores is meant to measure to what extent shorter translations are subset of longer translations. 
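The optimizer schedule described above (linear warm-up from 1e-7 to 0.001 over 4000 steps, then inverse-square-root decay) can be sketched as follows; the exact form of the decay scaling is an assumption, as the paper does not spell it out.

```python
def learning_rate(step: int, lr_init: float = 1e-7, lr_peak: float = 1e-3,
                  warmup: int = 4000) -> float:
    """Linear warm-up followed by inverse-square-root decay (assumed form)."""
    if step < warmup:
        return lr_init + (lr_peak - lr_init) * step / warmup
    return lr_peak * (warmup / step) ** 0.5

for s in (1, 2000, 4000, 16000):
    print(s, learning_rate(s))
```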
The impact on translation lengths is evaluated with the mean sentence-level length ratios between MT output and source (LR$^{src}$) and between MT output and reference (LR$^{ref}$). Results We performed experiments in two conditions: small data and larger data. In the small data condition we only use the MuST-C training set. In the large data condition, a baseline model is first trained on large data, then it is fine-tuned on the MuST-C training set using the proposed methods. Tables TABREF23 and TABREF26 lists the results for the small and large data conditions. For the two language directions they show BLEU and BLEU* scores, as well as the average length ratios. Results ::: Small Data condition The baselines generate translations longer than the source sentence side, with a length ratio of 1.05 for Italian and 1.11 for German. Decoding with length penalty (penalty) slightly decreases the length ratios but they are still far from our goal of LR$^{src}$=1.00. Fine-tuning. A comparison of the models trained from scratch (central portion of Table TABREF23) with their counterparts fine-tuned from the baseline (last portion of Table TABREF23) shows that the models in the first group generally generate shorter translations, but of worse quality. Additionally, the results with fine-tuning are not much different from the baseline. Existing models can be enhanced to produce shorter sentences, and little variation is observed in their translation quality. Length tokens. Fine-tuning with Len-Tok (Fourth section in Table TABREF23) gives a coarse-grained control over the length, while keeping BLEU scores similar to the baseline or slightly better. Decoding with the token normal leads to translations slightly shorter than the baseline for En-It (LR$^{src}$=1.05 and LR$^{ref}$=1.02), while the token small strongly reduces the translation lengths up to almost the source length (LR$^{src}$=1.01). In the opposite side, the token long generates longer translations which are slightly worse than the others (32.00). A similar behavior is observed for En-De, where the LR$^{src}$ goes from 1.12 to 1.07 when changing normal with short, and to 1.15 with long. The results with the token long are not interesting for our task and are given only for the sake of completeness. Length Encoding. The last section of Table TABREF23 lists the results of using length encoding (Len-Enc) relative (Rel) and absolute (Abs). The two encodings lead to different generated lengths, with Abs being always shorter than Rel. Unfortunately, these improvements in the lengths correspond to a significant degradation in translation quality, mostly due to truncated sentences. Results ::: Large data condition Our Baselines for the large data condition generate sentences with length ratios over the source comparable to the small data condition (LR$^\text{src}$ and LR$^\text{ref}$), but with better translation quality: 35.46 BLEU points for En-It and 33.96 for En-De. Length penalty slightly reduces the length ratios, which results in a 0.3 BLEU points improvement in Italian and -0.3 in German because of the brevity penalty. In the latter case, the BLEU* is slightly better than the standard baseline output. Also for the large data condition, while the length penalty slightly helps to shorten the translations, its effect is minimal and insufficient for our goal. Length tokens. In En-It there is no noticeable difference in translation quality between the tokens normal and short, while there is a degradation of $\sim 0.7$ points when using long. 
This last result is consistent with the ones observed before. Also in this case the token short does not degrade the BLEU score, and obtains the highest precision BLEU* with 36.22. In En-De we obtain the best results with token normal (34.46), which matches the length distribution of the references. The token short generates much shorter outputs (LR$^\text{src}$=1.05), which are also much shorter than the reference (LR$^\text{ref}=0.93$). Consequently the BLEU score degrades significantly (30.61), and also the BLEU* is 1 point lower than with the token normal. Longer translations can be generated with the token long, but they always come at the expense of lower quality. Length encoding. For En-It, Len-Enc Rel in Table TABREF26 achieves a LR$^\text{src}$ of 1.01 with a slight degradation of $0.3$ BLEU points over the baseline, while in the case of Abs the degradation is higher (-1.6) and LR$^\text{src}$ is similar (1.02). Also in En-De the degradation of Rel over the baseline is only -0.3, but the reduction in terms of LR$^\text{src}$ is very small (1.11 vs 1.13). On the other side, Abs produces much shorter translations (1.03 LR$^\text{src}$) at the expense of a significantly lower BLEU score (30.79). When computing the BLEU* score, the absolute encoding is only 0.45 points lower than the relative encoding (33.29 vs 33.74), but -0.8 lower than the baseline. Token + Encoding. So far, we have observed generally good results using the token method and translating with the tokens short and normal. while the length encoding generally produces a more predictable output length, in particular for the absolute variant. In the last experiment, we combine the two methods in order to have a system that can capture different styles (short, normal, long), as well as explicitly leveraging length information. The results listed in the last portion of Table TABREF26 (Tok+Enc) show that the relative encoding Rel produces better translations than Abs, but again it has less predictability in output length. For instance, in En-It the LR$^\text{src}$ of Rel is 0.96 with token short and 1.02 with normal, while for En-De it is 1.01 with short and 1.08 with normal. On the other side, the Abs produces LR$^\text{src}$ of 1.01 with both tokens in En-It and also with short in En-De, and it increases to only 1.03 with normal. Controlling output length. In order to achieve LR$^\text{src}$ as close as possible to 1.0, we set the target length during generation equal to the source length when using the length encoding methods. However, one advantage of length encoding is the possibility to set the target length to modify the average output length. We illustrate this option by using the Tok+Enc Rel system for En-It, and translating with the tokens normal or short and different scaling factors for the target length. The results, listed in Table TABREF27, show that we are able to approach an LR$^{src}$ of 1.0 with both tokens and the BLEU score is not affected with token normal (35.45) or improves with token short (35.11). Discussion. Length token is an effective approach to generate translations of different lengths, but it does not allow a fine-grained control of the output lengths and its results depend on the partition of the training set into groups, which is a manual process. Length encoding allows to change the output length, but the two variants have different effects. Absolute encoding is more accurate but generates sentences with missing information. 
The relative encoding produces better translations than the absolute encoding, but its control over the translation length is more loose. The increased length stability is captured by the standard deviation of the length ratio with the source, which is $0.14$ for length tokens, $\sim 0.11$ for relative encoding and $\sim 0.07$ for absolute encoding. The advantage of the combined approach is that it can generate sentences with different style to fit different length groups, and the output length can also be tuned by modifying the target length, while no important quality degradation is observed. Additionally, the standard deviation of the lengths is the same as for the length encoding used. Results ::: Human Evaluation and Analysis After manually inspecting the outputs of the best performing models under the large data condition, we decided to run a human evaluation only for the En-It Len-Tok model. As our ultimate goal is to be able to generate shorter translations and as close as possible to the length of the source sentences, we focused the manual evaluation on the Short output class and aimed to verify possible losses in quality with respect to the baseline system. We ran a head-to-head evaluation on the first 10 sentences of each test talk, for a total of 270 sentences, by asking annotators to blindly rank the two system outputs (ties were also permitted) in terms of quality with respect to a reference translation. We collected three judgments for each output, from 19 annotators, for a total of 807 scores (one sentence had to be discarded). Inter-annotator agreement measured with Fleiss' kappa was 0.35 (= fair agreement). Results reported in Table TABREF32 confirm the small differences observed in BLEU scores: there are only a 4% more wins for the Baseline and almost 60% of ties. The small degradation in quality of the shorter translations is statistically significant ($p<0.05$), as well as their difference in length ($p<0.001$). Notice that the evaluation was quite severe towards the shorter translations, as even small changes of the meaning could affect the ranking. After the manual evaluation, we analyzed sentences in which shorter translations were unanimously judged equal or better than the standard translations. We hence tried to identify the linguistic skills involved in the generation of shorter translations, namely: (i) use of abbreviations, (ii) preference of simple verb tenses over compound tenses, (iii) avoidance of not relevant adjective, adverbs, pronouns and articles, (iv) use of paraphrases. Table TABREF33 shows examples of the application of the above strategies as found in the test set. Related works As an integration of Section 2, we try to provide a more complete picture on previous work with seq-to-seq models to control the output length for text summarization, and on the use of tokens to bias in different ways the output of NMT. In text summarization, BIBREF8 proposed methods to control output length either by modifying the search process or the seq-to-seq model itself, showing that the latter being more promising. BIBREF9 addressed the problem similarly to our token approach, by training the model on data bins of homogeneous output length and conditioning the output on a length token. They reported better performance than BIBREF8. Finally, BIBREF11 proposed the extension of the positional encoding of the transformer (cf. Section 2), reporting better performance than BIBREF8 and BIBREF9. 
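Returning to the inter-annotator agreement reported for the manual evaluation above, Fleiss' kappa can be computed directly from a matrix counting, for each evaluated sentence, how many judges chose each ranking category. The sketch below is generic and not tied to the actual evaluation data.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for counts of shape (n_items, n_categories), where
    counts[i, j] is the number of raters assigning item i to category j
    (assumes the same number of raters for every item)."""
    n_items = counts.shape[0]
    n_raters = counts[0].sum()
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return float((P_bar - P_e) / (1 - P_e))

# Toy example: 3 judges labelling each sentence as win / tie / loss.
ratings = np.array([[2, 1, 0], [0, 3, 0], [1, 1, 1], [0, 2, 1]])
print(round(fleiss_kappa(ratings), 2))
```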
The use of tokens to condition the output of NMT started with the multilingual models BIBREF15, BIBREF16, and was then further applied to control the use of the politeness form in English-German NMT BIBREF32, in the translation from English into different varieties of the same language BIBREF33, for personalizing NMT to user gender and vocabulary BIBREF34, and finally to perform NMT across different translation styles BIBREF35. Conclusion In this paper, we have proposed two solutions for the problem of controlling the output length of NMT. A first approach, inspired by multilingual NMT, allows a coarse-grained control over the length and no degradation in translation quality. A second approach, inspired by positional encoding, enables a fine-grained control with only a small error in the token count, but at the cost of a lower translation quality. A manual evaluation confirms the translation quality observed with BLEU score. In future work, we plan to design more flexible and context-aware evaluations which allow us to account for short translations that are not equivalent to the original but at the same time do not affect the overall meaning of the discourse.
Yes
73abb173a3cc973ab229511cf53b426865a2738b
73abb173a3cc973ab229511cf53b426865a2738b_0
Q: What state-of-the-art models are compared against? Text: Introduction The field of autonomous dialog systems is rapidly growing with the spread of smart mobile devices, but it still faces many challenges to become the primary user interface for natural interaction through conversations. Indeed, when dialogs are conducted in noisy environments or when utterances themselves are noisy, correctly recognizing and understanding user utterances presents a real challenge. In the context of call-centers, efficient automation has the potential to boost productivity through increasing the probability of a call's success while reducing the overall cost of handling the call. One of the core components of a state-of-the-art dialog system is a dialog state tracker. Its purpose is to monitor the progress of a dialog and provide a compact representation of past user inputs and system outputs, represented as a dialog state. The dialog state encapsulates the information needed to successfully finish the dialog, such as users' goals or requests. Indeed, the term “dialog state” loosely denotes an encapsulation of user needs at any point in a dialog. Obviously, the precise definition of the state depends on the associated dialog task. An effective dialog system must include a tracking mechanism which is able to accurately accumulate evidence over the sequence of turns of a dialog, and it must adjust the dialog state according to its observations. In that sense, it is an essential component of a dialog system. However, actual user utterances and corresponding intentions are not directly observable due to errors from Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU), making it difficult to infer the true dialog state at any time of a dialog. A common method of modeling a dialog state is through the use of a slot-filling schema, as reviewed in BIBREF0. In slot-filling, the state is composed of a predefined set of variables with a predefined domain of expression for each of them. The goal of the dialog system is to efficiently instantiate each of these variables, thereby performing an associated task and satisfying the corresponding intent of the user. Various approaches have been proposed to define dialog state trackers. The traditional methods used in most commercial implementations rely on hand-crafted rules that typically use the most likely result from an NLU module, as described in BIBREF1. However, these rule-based systems are prone to frequent errors, as the most likely result is not always the correct one. Moreover, these systems often force the human customer to respond using simple keywords and to explicitly confirm everything they say, creating an experience that diverges considerably from the natural conversational interaction one might hope to achieve, as recalled in BIBREF2. More recent methods employ statistical approaches to estimate the posterior distribution over the dialog states, allowing them to represent the uncertainty of the results of the NLU module. Statistical dialog state trackers are commonly categorized into one of two approaches, according to how the calculation of the posterior probability distribution over the state is defined. In the first type, the generative approach uses a generative model of the dialog dynamics that describes how the sequence of utterances is generated from the hidden dialog state, and applies Bayes' rule to calculate the posterior distribution of the state.
It has been a popular approach for statistical dialog state tracking, since it naturally fits into the Partially Observable Markov Decision Process (POMDP) framework described in BIBREF3, which is an integrated model for dialog state tracking and dialog strategy optimization. Using this generic formalism of sequential decision processes, the task of dialog state tracking is to calculate the posterior distribution over a hidden state given a history of observations. In the second type, the discriminative approach models the posterior distribution directly, through a closed algebraic formulation, as a loss minimization problem. Statistical dialog systems, in maintaining a distribution over multiple hypotheses of the true dialog state, are able to behave robustly even in the face of noisy conditions and ambiguity. In this paper, a statistical approach to state tracking is proposed by leveraging the recent progress of spectral decomposition methods, formalized as bilinear algebraic decomposition, and the associated inference procedures. The proposed model estimates each state transition with respect to a set of observations and is able to compute the state transition through an inference procedure with linear complexity with respect to the number of variables and observations. Roadmap: This paper is structured as follows. Section "Generative Dialog State Tracking" formally defines transactional dialogs and describes the associated problem of statistical dialog state tracking with both the generative and discriminative approaches. Section "Spectral decomposition model for state tracking in slot-filling dialogs" depicts the proposed decompositional model for coupled and temporal hidden variable models and the associated inference procedure based on Collective Matrix Factorization (CMF). Finally, Section "Experimental settings and Evaluation" illustrates the approach with experimental results obtained using a state-of-the-art benchmark for dialog state tracking. Transactional dialog state tracking The dialog state tracking task we consider in this paper is formalized as follows: at each turn of a task-oriented dialog between a dialog system and a user, the dialog system chooses a dialog act $d$ to express and the user answers with an utterance $u$. The dialog state at each turn of a given dialog is defined as a distribution over a set of predefined variables, which define the structure of the state, as mentioned in BIBREF4. This classic state structure is commonly called slot filling, and the associated dialogs are commonly referred to as transactional. Indeed, in this context, the state tracking task consists of estimating the value of a set of predefined variables in order to perform a procedure or transaction which is, in fact, the purpose of the dialog. Typically, the NLU module processes the user utterance and generates an N-best list $o = \lbrace <d_1, f_1>, \ldots , <d_n, f_n>\rbrace $, where $d_i$ is the hypothesized user dialog act and $f_i$ is its confidence score. In the simplest case where no ASR and NLU modules are employed, as in a text-based dialog system as proposed in BIBREF5, the utterance is taken as the observation using a so-called bag-of-words representation. If an NLU module is available, standardized dialog act schemas can be considered as observations, as in BIBREF6. Furthermore, if prosodic information is made available by the ASR component of the dialog system, as in BIBREF7, it can also be considered as part of the observation definition.
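A minimal sketch of the two kinds of observation mentioned above, a bag-of-words vector for text input and an N-best list of dialog-act hypotheses with confidence scores; the vocabulary and act labels are illustrative only.

```python
from collections import Counter

def bag_of_words(utterance: str, vocab: list) -> list:
    """Sparse bag-of-words observation for a text-based dialog system."""
    counts = Counter(utterance.lower().split())
    return [counts.get(w, 0) for w in vocab]

# N-best output of an NLU module: (hypothesized dialog act, confidence score).
nbest = [("inform(food=italian)", 0.71), ("inform(food=indian)", 0.18)]

vocab = ["cheap", "italian", "restaurant", "north"]
print(bag_of_words("I want a cheap Italian restaurant", vocab))
```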
A statistical dialog state tracker maintains, at each discrete time step $t$, the probability distribution over states, $b(s_t)$, which is the system's belief over the state. The general process of slot-filling, transactional dialog management is summarized in Figure 1. First, intent detection, typically an NLU problem, consists of identifying the task the user wants the system to accomplish. This first step determines the set of variables to instantiate during the second step, which is the slot-filling process. This type of dialog management assumes that a set of variables is required for each predefined intention. The slot-filling process is a classic task of dialog management and is composed of the cyclic tasks of information gathering and integration, in other words, dialog state tracking. Finally, once all the variables have been correctly instantiated, a common practice in dialog systems is to perform a last general confirmation of the task requested by the user before executing it. As the illustrative example used for the proposed method in this paper, the DSTC-2 challenge, presented in BIBREF8, takes its context from the restaurant information domain, and the variables to instantiate as part of the state are {Area (5 possible values); Food (91 possible values); Name (113 possible values); Pricerange (3 possible values)}. In this framework, the purpose is to estimate the correct instantiation of each variable as early as possible in the course of a given dialog. In the following, we assume the state is represented as a concatenation of zero-one encodings of the values of each variable defining the state. Furthermore, in the context of this paper, only the bag of words of a given turn is considered as the observation, but dialog acts or named entities detected by an NLU module could also be incorporated as evidence. Two statistical approaches have been considered for maintaining the distribution over a state given sequential NLU output. First, the discriminative approach aims to model the posterior probability distribution of the state at time $t+1$ with regard to the state at time $t$ and the observations $z_{1:t}$. Second, the generative approach attempts to model the transition probability and the observation probability in order to exploit possible interdependencies between the hidden variables that compose the dialog state. Generative Dialog State Tracking A generative approach to dialog state tracking computes the belief over the state using Bayes' rule, using the belief from the last turn $b(s_{t-1})$ as a prior and the likelihood given the user utterance hypotheses $p(z_t|s_t)$, with $z_t$ the observation gathered at time $t$. In the prior work BIBREF4, the likelihood is factored and some independence assumptions are made: $$b_t \propto \sum _{s_{t-1},z_t} p(s_t|z_t, d_{t-1}, s_{t-1}) p(z_t|s_t) b(s_{t-1})$$ (Eq. 3) Figure 2 depicts a typical generative model of a dialog state tracking process using the factorial hidden Markov model proposed by BIBREF9. The shaded variables are the observed dialog turns and each unshaded variable represents a single task-dependent variable. In this family of approaches, scalability is considered one of the main issues. One way to reduce the amount of computation is to group the states into partitions, as proposed in the Hidden Information State (HIS) model of BIBREF10.
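To make the generative update of Equation 3 concrete, here is a small self-contained sketch over a toy state space; the transition and observation models below are invented purely for illustration and are not part of the original system. Note that the update enumerates the full state space, which is precisely the scalability issue discussed next.

```python
import numpy as np

states = ["food=italian", "food=indian", "food=chinese"]   # toy state space
n = len(states)

# Made-up p(s_t | s_{t-1}): users rarely change their goal between turns.
transition = np.full((n, n), 0.1) + 0.7 * np.eye(n)
transition /= transition.sum(axis=1, keepdims=True)

def obs_likelihood(utterance):
    # Crude p(z_t | s_t): boost states whose value is mentioned in the turn.
    return np.array([3.0 if s.split("=")[1] in utterance else 1.0 for s in states])

def belief_update(belief, utterance):
    predicted = transition.T @ belief          # sum over s_{t-1}
    updated = obs_likelihood(utterance) * predicted
    return updated / updated.sum()             # normalization (the proportionality in Eq. 3)

b = np.full(n, 1.0 / n)                        # uniform initial belief
for turn in ["i want italian food", "something not too expensive"]:
    b = belief_update(b, turn)
    print(dict(zip(states, np.round(b, 3))))
```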
Another approach to cope with the scalability problem in dialog state tracking is to adopt a factored dynamic Bayesian network, making conditional independence assumptions among dialog state components and then using approximate inference algorithms such as loopy belief propagation, as proposed in BIBREF11, or blocked Gibbs sampling, as in BIBREF12. To cope with such limitations, the discriminative methods of state tracking presented in the next part of this section aim to directly model the posterior distribution of the tracked state using a chosen parametric form. Discriminative Dialog State Tracking The discriminative approach to dialog state tracking computes the belief over a state via a trained parametric model that directly represents the belief $b(s_{t+1}) = p(s_{t+1} | s_t, z_t)$. Maximum Entropy has been widely used in the discriminative approach, as described in BIBREF13. It formulates the belief as follows: $$b(s) = P(s|x) = \eta .e^{w^T\phi (x,s)}$$ (Eq. 6) where $\eta $ is the normalizing constant, $x = (d^u_1, d^m_1, s_1, \dots , d^u_t, d^m_t, s_t)$ is the history of user dialog acts, $d^u_i, i \in \lbrace 1,\ldots ,t\rbrace $, system dialog acts, $d^m_i, i \in \lbrace 1,\ldots ,t\rbrace $, and the sequence of states leading to the current dialog turn at time $t$. Then, $\phi (.)$ is a vector of feature functions on $x$ and $s$, and finally, $w$ is the set of model parameters to be learned from annotated dialog data. According to this formulation, the posterior computation has to be carried out for all possible state realizations in order to obtain the normalizing constant $\eta $. This is not feasible for real dialog domains, which can have a large number of variables and possible variable instantiations. It is therefore vital for the discriminative approach to reduce the size of the state space. For example, BIBREF13 proposes to restrict the set of possible state variables to those that appeared in the NLU results. More recently, BIBREF14 assumes conditional independence between dialog state variables to address scalability and uses a conditional random field to track each variable separately. Finally, deep neural models, operating on a sliding window of features extracted from previous user turns, have also been proposed in BIBREF15. In the current literature, this family of approaches has proven to be the most efficient on publicly available state tracking datasets. In the next section, we present a decompositional approach to dialog state tracking that aims at reconciling the two main families of approaches while leveraging recent advances in low-rank bilinear decomposition models, as recalled in BIBREF16, which seem particularly well adapted to the sparse nature of dialog state tracking tasks. Spectral decomposition model for state tracking in slot-filling dialogs In this section, the proposed model is presented and the learning and prediction procedures are detailed. The general idea consists in the decomposition of a matrix $M$ whose rows correspond to turn transitions and whose columns correspond to a sparse encoding of the associated feature variables. More precisely, a row of $M$ is the concatenation of the sparse representations of (1) $s_{t}$, the state at time $t$, (2) $s_{t+1}$, the state at time $t+1$, and (3) $z_t$, a set of features representing the observation. In the context considered here, the bag of words composing the current turn is chosen as the observation.
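As a rough illustration of how one row of $M$ can be assembled from a turn transition, the sketch below uses a toy slot ontology and vocabulary (both invented, not the actual DSTC-2 ones): each slot contributes a zero-one block and the observation contributes a bag-of-words block.

```python
import numpy as np

slots = {                                   # toy ontology; "None" means not observed yet
    "area": ["None", "north", "south", "centre"],
    "food": ["None", "italian", "indian", "chinese"],
    "pricerange": ["None", "cheap", "moderate", "expensive"],
}
vocab = ["i", "want", "cheap", "italian", "food", "in", "the", "centre"]

def encode_state(assignment):
    parts = []
    for slot, values in slots.items():
        one_hot = np.zeros(len(values))
        one_hot[values.index(assignment.get(slot, "None"))] = 1.0
        parts.append(one_hot)
    return np.concatenate(parts)            # concatenation of zero-one slot encodings

def encode_turn(utterance):
    bow = np.zeros(len(vocab))
    for w in utterance.lower().split():
        if w in vocab:
            bow[vocab.index(w)] += 1.0
    return bow

def transition_row(s_t, s_next, z_t):
    # One row of M: [ s_t | s_{t+1} | z_t ]
    return np.concatenate([encode_state(s_t), encode_state(s_next), encode_turn(z_t)])

row = transition_row({}, {"food": "italian", "pricerange": "cheap"},
                     "i want cheap italian food")
M = np.vstack([row])                        # in practice: one row per annotated transition
print(M.shape)                              # (1, 32)
```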
The parameter learning procedure is formalized as a matrix decomposition task solved through Alternating Least Squares ridge regression. The ridge regression formulation allows an asymmetric penalization of the targeted variables of the state tracking task. Figure 3 illustrates the collective matrix factorization task that constitutes the learning procedure of the state tracking model. The model introduces the components of the decomposed matrix in the form of latent variables $\lbrace A, B, C\rbrace $, also called embeddings. In the next section, the learning procedure from dialog state transition data and the tracking algorithm itself are described. In other terms, each row of the matrix corresponds to the concatenation of a "one-hot" representation of a state description at time $t$ and a dialog turn at time $t$, and each column of the overall matrix $M$ corresponds to a considered feature of the state or of the dialog turn, respectively. This type of formulation of the state tracking problem presents several advantages. First, the model is particularly flexible: the definitions of the state and observation spaces are independent of the learning and prediction models and can be adapted to the tracking context. Second, a bias by data can be applied in order to condition the transition model w.r.t. separate matrices that are decomposed jointly, as often proposed in multi-task learning, described in BIBREF17, and collective matrix factorization, detailed in BIBREF18. Finally, the decomposition method is fast and parallelizable because it mainly leverages core methods of linear algebra. To our knowledge, this proposition is the first attempt to formalize and solve the state tracking task using a matrix decomposition approach. Learning method For the sake of simplicity, the $\lbrace B,C\rbrace $ matrices are concatenated into $E$, and $M$ is the concatenation of the matrices $\lbrace S_t,S_{t+1},Z_t\rbrace $ depicted in Figure 3. Equation 9 defines the optimization task, i.e. the loss function, associated with the problem of learning the latent variables $\lbrace A,E\rbrace $. $$\min _{A,E} || (M - AE ) W||_2^2 + \lambda _a ||A||_2^2 + \lambda _b ||E||_2^2 \hspace{5.0pt},$$ (Eq. 9) where $\lbrace \lambda _a, \lambda _b\rbrace \in \mathbb {R}^2$ are regularization hyper-parameters and $W$ is a diagonal matrix that increases the weight of the state variables $s_{t+1}$ in order to bias the resulting parameters $\lbrace A,E\rbrace $ toward better predictive accuracy on these specific variables. This type of weighting approach has been shown to be efficient in comparable generative-discriminative trade-off tasks, as mentioned in BIBREF19 and BIBREF20. An Alternating Least Squares method, i.e. a sequence of two convex optimization problems, is used to perform the minimization. First, for known $E$, compute: $$A^* = \operatornamewithlimits{arg\,min}_{A} || (M - AE ) W ||_2^2 + \lambda _a ||A||_2^2 \hspace{5.0pt},$$ (Eq. 10) then for a given $A$, $$E^* = \operatornamewithlimits{arg\,min}_{E} || (M - AE) W ||_2^2 + \lambda _b ||E||_2^2$$ (Eq. 11) By iteratively solving these two optimization problems, we obtain the following fixed-point regularized and weighted alternating least squares updates, where $t$ corresponds to the current step of the overall iterative process: $$A_{t+1} \leftarrow (E_{t}^TWE_{t} + \lambda _a\mathbb {I})^{-1}E_{t}^TWM$$ (Eq. 12) $$E_{t+1} \leftarrow (A_{t}^TA_{t} + \lambda _b\mathbb {I})^{-1}A_{t}^TM$$ (Eq. 13)
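A minimal numpy sketch of these alternating updates is given below. It uses one consistent shape convention ($M \in \mathbb{R}^{n \times d}$, $A \in \mathbb{R}^{n \times k}$, $E \in \mathbb{R}^{d \times k}$, $M \approx AE^T$), which differs slightly from the notation above, and, following the text, applies the weight matrix $W$ only in the update of $A$. The toy data and column weights are invented for illustration.

```python
import numpy as np

def cmf_als(M, k=25, weights=None, lam_a=0.1, lam_b=0.1, n_iters=20, seed=0):
    """Weighted alternating least squares for M ~ A @ E.T (a sketch of Eq. 12-13)."""
    rng = np.random.default_rng(seed)
    n, d = M.shape
    W = np.eye(d) if weights is None else np.diag(weights)  # column (feature) weights
    A = rng.normal(scale=0.01, size=(n, k))
    E = rng.normal(scale=0.01, size=(d, k))
    I_k = np.eye(k)
    for _ in range(n_iters):
        # Update transition embeddings A (weighted ridge regression over columns).
        A = M @ W @ E @ np.linalg.inv(E.T @ W @ E + lam_a * I_k)
        # Update feature embeddings E (unweighted step, as in the text).
        E = M.T @ A @ np.linalg.inv(A.T @ A + lam_b * I_k)
    return A, E

# Toy usage: up-weight the columns assumed to encode the s_{t+1} block.
n, d = 1000, 32
M = (np.random.default_rng(1).random((n, d)) < 0.05).astype(float)
w = np.ones(d); w[12:24] = 5.0            # assume columns 12..23 encode s_{t+1}
A, E = cmf_als(M, k=25, weights=w)
print(A.shape, E.shape)                   # (1000, 25) (32, 25)
```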
As presented in Equation 12, the $W$ matrix is only involved in the update of $A$, because only the subset of the columns of $E$ representing the features of the state to predict are weighted differently, in order to increase the importance of the corresponding columns in the loss function. For the optimization of the latent representations composing $E$, presented in Equation 13, each row embedding stored in $A$ holds the same weight, so in this second step of the algorithm $W$ is the identity matrix and therefore does not appear. Prediction method The prediction process consists of (1) computing the embedding of the current transition by solving the corresponding least squares problem based on the two variables $\lbrace s_t,z_t\rbrace $, which correspond to our current knowledge of the state at time $t$ and the set of observations extracted from the last turn, composed of the system and user utterances, and (2) estimating the missing values of interest, i.e. the likelihood of each value of each variable constituting the state at time $(t+1)$, $s_{t+1}$, by computing the dot product between the transition embedding calculated in (1) and the corresponding column embeddings of $E$, i.e. those of the values of each variable of $s_{t+1}$. More precisely, we write this decomposition as $$M = A.E^T$$ (Eq. 15) where $M$ is the matrix of data to decompose and $.$ is the matrix-matrix product operator. As in the previous section, $A$ has a row for each transition embedding, and $E$ has a column for each variable-value embedding in the form of a zero-one encoding. When a new row of observations $m_i$ arrives for a new state $s_i$ and observations $z_i$, with $E$ fixed, the purpose of the prediction task is to find the row $a_i$ of $A$ such that: $$a_i.E^T \approx m^T_i$$ (Eq. 16) Even if it is generally difficult to require these to be equal, we can require that they have the same projection into the latent space: $$a_i^T.E^T.E = m_i^T.E$$ (Eq. 17) Then, the classic closed-form solution of a linear regression task can be derived: $$a_i^T = m_i^T.E.(E^T.E)^{-1} \\ a_i = (E^T.E)^{-1}.E^T.m_i$$ (Eq. 18) In fact, Equation 18 gives the optimal value of the embedding of the transition $m_i$, assuming a quadratic loss is used. Otherwise it is an approximation, for example in the case of a matrix decomposition of $M$ using a logistic loss. Note that, in Equation 18, $ (E^T.E)^{-1}$ requires a matrix inversion, but only of a low-dimensional matrix (the size of the latent space). Several advantages can be identified in this approach. First, at learning time, alternating ridge regression is computationally efficient because a closed-form solution exists at each step of the optimization process employed to infer the parameters, i.e. the low-rank matrices, of the model. Second, at decision time, the state tracking procedure consists of (1) computing the embedding $a$ of the current transition using the current state estimate $s_t$ and the current observation set $z_t$, and (2) computing the distribution over the state, defined as a vector-matrix product between $a$ and the latent matrix $E$. Finally, this inference method can be partially related to the general technique of matrix completion. However, a proper matrix completion task would require a matrix $M$ with missing values covering the exhaustive list of possible triples ${s_t, s_{t+1}, z_t}$, which is obviously intractable to represent and decompose.
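A corresponding sketch of the prediction step of Equation 18: the current transition is embedded using only the known column blocks ($s_t$ and $z_t$), and the candidate values of $s_{t+1}$ are scored against their column embeddings. The block indices, the random embeddings, and the small ridge term added for numerical stability are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def track_step(E, s_t, z_t, blocks, lam=1e-3):
    """Given feature embeddings E (d x k), score the candidate s_{t+1} features."""
    st_idx, snext_idx, z_idx = blocks              # column indices of each block of M
    known_idx = np.concatenate([st_idx, z_idx])    # columns observed at prediction time
    m_known = np.concatenate([s_t, z_t])           # their values for this transition
    E_known = E[known_idx]                         # (|known| x k)
    k = E.shape[1]
    # Closed-form (ridge-stabilized) solution for the transition embedding a_i (Eq. 18).
    a = np.linalg.solve(E_known.T @ E_known + lam * np.eye(k), E_known.T @ m_known)
    # Score of each s_{t+1} feature = dot product with its column embedding.
    return E[snext_idx] @ a

rng = np.random.default_rng(2)
d, k = 32, 25
E = rng.normal(size=(d, k))                        # stands in for learned embeddings
blocks = (np.arange(0, 12), np.arange(12, 24), np.arange(24, 32))
s_t = np.zeros(12); s_t[3] = 1.0                   # current state encoding
z_t = rng.integers(0, 2, size=8).astype(float)     # bag-of-words observation
scores = track_step(E, s_t, z_t, blocks)
print(scores.shape, scores.argmax())               # (12,) index of the top s_{t+1} feature
```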
Experimental settings and Evaluation In the first subsection, the dialog domain used for the evaluation of our dialog tracker is described, together with the different probability models used for the domain. In the second subsection, we present a first set of experimental results obtained with the proposed approach and compare them to several reported results of state-of-the-art approaches. Restaurant information domain We used the DSTC-2 dialog domain, as described in BIBREF21, in which the user queries a database of local restaurants by interacting with a dialog system. The dataset for the restaurant information domain was originally collected using Amazon Mechanical Turk. A typical dialog proceeds as follows: first, the user specifies a personal set of constraints concerning the restaurant they are looking for. Then, the system offers the name of a restaurant that satisfies the constraints. The user then accepts the offer and requests additional information about the accepted restaurant. The dialog ends when all the information requested by the user has been provided. In this context, the dialog state tracker should be able to track the several types of information that compose the state, namely the geographic area, food type, name and price range slots. In this paper, we restrict ourselves to tracking these variables, but our tracker can easily be set up to track others as well if they are properly specified. The dialog state tracker updates its belief turn by turn, receiving evidence from the NLU module together with the actual utterance produced by the user. In this experiment, we chose to restrict the output of the NLU module to the bag of words of the user utterances in order to be comparable to the most recent approaches to state tracking, such as the one proposed in BIBREF5, which only use this information as evidence. One important benefit of such an approach is that it dramatically simplifies the process of state tracking by suppressing the NLU task, which is mainly formalized in current approaches as a supervised learning problem. The task of the dialog state tracker is to generate a set of possible states and their confidence scores for each slot, with the confidence score corresponding to the posterior probability of each variable value w.r.t. the current estimate of the state and the current evidence. Finally, the dialog state tracker also maintains a special variable value, called None, which represents the fact that a given variable composing the state has not been observed yet. In the rest of this section, we present experimental results of state tracking obtained on this dataset and compare them with state-of-the-art generative and discriminative approaches. Experimental results As a comparison to state-of-the-art methods, Table 1 presents accuracy results of the best Collective Matrix Factorization model, with a latent space dimension of 350 determined by cross-validation on a development set, where the value of each slot is instantiated as the most probable w.r.t. the inference procedure presented in Section "Spectral decomposition model for state tracking in slot-filling dialogs". In our experiments, the variance is estimated using standard dataset reshuffling. The results for several state-of-the-art generative and discriminative state tracking methods on this dataset are taken from the publicly available results reported in BIBREF22. More precisely, as provided by the state-of-the-art approaches, the accuracy score computes $p(s^*_{t+1}|s_t,z_t)$, commonly named the joint goal.
Our proposition is compared to the 4 baseline trackers provided by the DSTC organisers: the baseline tracker (Baseline), the focus tracker (Focus), the HWU tracker (HWU) and the HWU tracker with the “original” flag set (HWU+), respectively. We then present a comparison to a maximum entropy (MaxEnt) discriminative model proposed in BIBREF23 and, finally, to a deep neural network (DNN) architecture proposed in BIBREF24, as also reported in BIBREF22. Related work As depicted in Section "Generative Dialog State Tracking", the literature of the domain can mainly be decomposed into three families of approaches: rule-based, generative and discriminative. In previous work on this topic, BIBREF25 formally used particle filters to perform inference in a Bayesian network model of the dialog state, BIBREF26 presented a generative tracker and showed how to train an observation model from transcribed data, BIBREF27 grouped indistinguishable dialog states into partitions and consequently performed dialog state tracking on these partitions instead of the individual states, and BIBREF11 used a dynamic Bayesian network to represent the dialog model in an approximate form. Thus, until recently, most attention in the dialog state belief tracking literature has been given to generative Bayesian network models, as proposed in BIBREF28 and BIBREF11. On the other hand, the successful use of discriminative models for belief tracking has recently been reported by BIBREF29 and BIBREF5 and was a major theme in the results of the recent edition of the Dialog State Tracking Challenge. In this paper, a latent decomposition type of approach is proposed in order to address this general problem of dialog systems. Our method gives encouraging results in comparison to the state of the art on this dataset and does not require complex inference at test time because, as detailed in Section "Spectral decomposition model for state tracking in slot-filling dialogs", the tracking algorithm has linear complexity w.r.t. the sum of the numbers of realizations of the variables defining the state to track, which we believe is one of the main advantages of this method. Second, the collective matrix factorization paradigm also allows for data fusion and bias-by-data types of modeling, as successfully performed in matrix-factorization-based recommender systems BIBREF30. Conclusion In this paper, a methodology and algorithm for efficient state tracking in the context of slot-filling dialogs have been presented. The proposed probabilistic model and inference algorithm allow efficient handling of dialog management in the context of the classic dialog schemes that constitute a large part of task-oriented dialog tasks. More precisely, such a system allows efficient tracking of the hidden variables defining the user goal using any kind of available evidence, from utterance bag-of-words to the output of a Natural Language Understanding module. Our current investigations on this subject benefit from distributional word representations, as proposed in BIBREF31, to cope with the question of unknown words and unknown slots, as suggested in BIBREF32.
In summary, the proposed approach differentiates itself from the prior art on the following points: (1) it produces a joint probability model of the hidden variable transitions in a given dialog state and the observations, which allows tracking the current beliefs about the user goals while explicitly considering potential interdependencies between state variables; (2) it proposes the necessary computational framework, based on collective matrix factorization, to efficiently infer the distribution over the state variables in order to derive an adequate dialog policy of information seeking in this context. Finally, while transactional dialog tracking is mainly useful in the context of autonomous dialog management, the technology can also be used for dialog machine reading and knowledge extraction from human-to-human dialog corpora, as proposed in the fourth edition of the Dialog State Tracking Challenge.
A deep neural network (DNN) architecture proposed in BIBREF24 and a maximum entropy (MaxEnt) discriminative model proposed in BIBREF23.
1d9b953a324fe0cfbe8e59dcff7a44a2f93c568d
1d9b953a324fe0cfbe8e59dcff7a44a2f93c568d_0
Q: Does the API provide the ability to connect to models written in some other deep learning framework? Text: Introduction Structured prediction is an area of machine learning focusing on representations of spaces with combinatorial structure, and algorithms for inference and parameter estimation over these structures. Core methods include both tractable exact approaches like dynamic programming and spanning tree algorithms as well as heuristic techniques such as linear programming relaxations and greedy search. Structured prediction has played a key role in the history of natural language processing. Example methods include techniques for sequence labeling and segmentation BIBREF0, BIBREF4, discriminative dependency and constituency parsing BIBREF10, BIBREF8, unsupervised learning for labeling and alignment BIBREF11, BIBREF12, and approximate translation decoding with beam search BIBREF9, among many others. In recent years, research into deep structured prediction has studied how these approaches can be integrated with neural networks and pretrained models. One line of work has utilized structured prediction as the final layer for deep models BIBREF13, BIBREF14. Another has incorporated structured prediction within deep learning models, exploring novel models for latent-structure learning, unsupervised learning, or model control BIBREF15, BIBREF16, BIBREF17. We aspire to make both of these use-cases as easy to use as standard neural networks. The practical challenge of employing structured prediction is that many required algorithms are difficult to implement efficiently and correctly. Most projects reimplement custom versions of standard algorithms or focus on a single well-defined model class. This research style makes it difficult to combine and try out new approaches, a problem that has been compounded by the complexity of research in deep structured prediction. With this challenge in mind, we introduce Torch-Struct with three specific contributions: Modularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework. Completeness: a broad array of classical algorithms are implemented and new models can easily be added in Python. Efficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization. In this system description, we first motivate the approach taken by the library, then present a technical description of the methods used, and finally present several example use cases. Related Work Several software libraries target structured prediction. Optimization tools, such as SVM-struct BIBREF18, focus on parameter estimation. Model libraries, such as CRFSuite BIBREF19 or CRF++ BIBREF20, implement inference for a fixed set of popular models, such as linear-chain CRFs. General-purpose inference libraries, such as PyStruct BIBREF21 or TurboParser BIBREF22, utilize external solvers for (primarily MAP) inference, such as integer linear programming solvers and ADMM. Probabilistic programming languages, for example languages that integrate with deep learning such as Pyro BIBREF23, allow for specification and inference over some discrete domains. Most ambitiously, inference libraries such as Dyna BIBREF24 allow for declarative specifications of dynamic programming algorithms to support inference for generic algorithms. Torch-Struct takes a different approach and integrates a library of optimized structured distributions into a vectorized deep learning system.
We begin by motivating this approach with a case study. Motivating Case Study While structured prediction is traditionally presented at the output layer, recent applications have deployed structured models broadly within neural networks BIBREF15, BIBREF25, BIBREF16. Torch-Struct aims to encourage this general use case. To illustrate, we consider a latent tree model. ListOps BIBREF26 is a dataset of mathematical functions. Each data point consists of a prefix expression $x$ and its result $y$. Models such as a flat RNN will fail to capture the hierarchical structure of this task. However, if a model can induce an explicit latent $z$, the parse tree of the expression, then the task is easy to learn by a tree-RNN model $p(y | x, z)$ BIBREF16, BIBREF27. A popular approach is a latent-tree RL model, which we briefly summarize. The objective is to maximize the probability of the correct prediction under the expectation of a prior tree model $p(z|x ;\phi )$. Computing the expectation is intractable, so policy gradient is used. First a tree is sampled, $\tilde{z} \sim p(z | x;\phi )$, and the gradient with respect to $\phi $ is then approximated from this sample using a variance reduction baseline $b$. A common choice is the self-critical baseline BIBREF28. Finally, an entropy regularization term is added to the objective to encourage exploration of different trees, $ O + \lambda \mathbb {H}(p(z\ |\ x;\phi ))$. Even in this brief overview, we can see how complex a latent structured learning problem can be. To compute these terms, we need 5 different properties of the tree model $p(z\ | x; \phi )$: the policy gradient, $\tilde{z} \sim p(z \ |\ x ; \phi )$; scoring policy samples, $p(z \ | \ x; \phi )$; backpropagation, $\frac{\partial }{\partial \phi } p(z\ |\ x; \phi )$; the self-critical baseline, $\arg \max _z p(z \ |\ x;\phi )$; and the objective regularizer, $\mathbb {H}(p(z\ |\ x;\phi ))$. For structured models, each of these terms is non-trivial to compute. A goal of Torch-Struct is to make it seamless to deploy structured models for these complex settings. To demonstrate this, Torch-Struct includes an implementation of this latent-tree approach. With a minimal amount of user code, the implementation achieves near-perfect accuracy on the ListOps dataset. Library Design The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\ell $, the user can request samples $z \sim \textsc {CRF}(\ell )$, probabilities $\textsc {CRF}(z;\ell )$, modes $\arg \max _z \textsc {CRF}(\ell )$, or other distributional properties such as $\mathbb {H}(\textsc {CRF}(\ell ))$. The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as well as for more complex operations like attention or reinforcement learning. Figure FIGREF11 demonstrates this API for a binary tree CRF over an ordered sequence, such as $p(z \ | \ y ;\phi )$ from the previous section. The distribution takes in log-potentials $\ell $ which score each possible span in the input. The distribution converts these to probabilities of a specific tree.
This distribution can be queried for predicting over the set of trees, sampling a tree for model structure, or even computing entropy over all trees. Table TABREF2 shows all of the structures and distributions implemented in Torch-Struct. While each is internally implemented using different specialized algorithms and optimizations, from the user's perspective they all utilize the same external distributional API and pass a generic set of distributional tests. This approach hides the internal complexity of the inference procedure, while giving the user full access to the model. Technical Approach ::: Conditional Random Fields We now describe the technical approach underlying the library. To establish notation, first consider the implementation of a categorical distribution, Cat($\ell $), with one-hot categories $z$ with $z_i = 1$ from a set $\cal Z$ and probabilities given by the softmax, $p(z;\ell ) = \exp (\ell _i) / \sum _{j=1}^K \exp (\ell _j)$. Define the log-partition as $A(\ell ) = \mathrm {LSE}(\ell )$, i.e. the log of the denominator, where $\mathrm {LSE}$ is the log-sum-exp operator. Computing probabilities or sampling from this distribution requires enumerating $\cal Z$ to compute the log-partition $A$. A useful identity is that derivatives of $A$ yield category probabilities, $p(z_i = 1) = \frac{\partial }{\partial \ell _i} A(\ell )$. Other distributional properties can be similarly extracted from variants of the log-partition. For instance, define $A^*(\ell ) = \log \max _{j=1}^K \exp \ell _j$; then $\mathbb {I}(z^*_i = 1) = \frac{\partial }{\partial \ell _i} A^*(\ell ) $. Conditional random fields, CRF($\ell $), extend the softmax to combinatorial spaces where ${\cal Z}$ is exponentially sized. Each $z$ is now represented as a binary vector over a polynomial-sized set of parts, $\cal P$, i.e. ${\cal Z} \subset \lbrace 0, 1\rbrace ^{|\cal P|}$. Similarly, log-potentials are now defined over parts, $\ell \in \mathbb {R}^{|\cal P|}$. For instance, in Figure FIGREF11 each span is a part and the $\ell $ vector is shown in the top-left figure. Define the probability of a structure $z$ as $p(z;\ell ) = \exp (z^\top \ell ) / \sum _{z^{\prime } \in {\cal Z}} \exp (z^{\prime \top } \ell )$. Computing probabilities or sampling from this distribution requires computing the log-partition term $A$. In general, computing this term is now intractable; however, for many core algorithms in NLP there exist efficient combinatorial algorithms for this term (as enumerated in Table TABREF2). Derivatives of the log-partition again provide distributional properties. For instance, the marginal probabilities of parts are given by $p(z_p = 1) = \frac{\partial }{\partial \ell _p} A(\ell )$. Similarly, derivatives of $A^*$ correspond to whether a part appears in the argmax structure, $\mathbb {I}(z^*_p = 1) = \frac{\partial }{\partial \ell _p} A^*(\ell ) $. While these gradient identities are well-known BIBREF30, they are not commonly deployed. Computing CRF properties is typically done through two-step specialized algorithms, such as forward-backward, inside-outside, or similar variants such as Viterbi with backpointers BIBREF31. In our experiments, we found that using these identities with auto-differentiation on GPU was often faster, and much simpler, than custom two-pass approaches. Torch-Struct is thus designed around using gradients for distributional computations. Technical Approach ::: Dynamic Programming and Semirings Torch-Struct is a collection of generic algorithms for CRF inference. Each CRF distribution object, $\textsc {CRF}(\ell )$, is constructed by providing $\ell \in \mathbb {R}^{|{\cal P}|}$, where the parts $\cal P$ are specific to the type of distribution.
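As an illustration of this usage pattern, the sketch below builds a linear-chain CRF distribution from edge log-potentials and queries several of its properties. The class and attribute names follow the distributional API described here, but the exact signatures should be treated as assumptions rather than authoritative documentation.

```python
import torch
import torch_struct

batch, N, C = 2, 8, 5                        # batch size, sequence length, label count
# Edge log-potentials for a first-order chain: assumed shape (batch, N - 1, C, C).
log_potentials = torch.randn(batch, N - 1, C, C, requires_grad=True)

dist = torch_struct.LinearChainCRF(log_potentials)

marginals = dist.marginals                   # edge marginals, same shape as potentials
best = dist.argmax                           # highest-scoring structure (one-hot parts)
entropy = dist.entropy                       # entropy over all label sequences
sample = dist.sample((1,))                   # a sampled structure
score = dist.log_prob(best)                  # log-probability of a given structure

# Where meaningful, these quantities are differentiable with respect to the
# log-potentials, so they can feed output losses, attention, or RL objectives.
print(marginals.shape, entropy.shape, score.shape)
```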
Internally, each distribution is implemented through a single Python function for computing the log-partition function $A(\ell )$. From this function, the library uses auto-differentiation and the identities from the previous section to define a complete distribution object. The core models implemented by the library are shown in Table TABREF2. To make the approach concrete, we consider the example of a linear-chain CRF (a chain of three latent nodes $z_1 - z_2 - z_3$). The model has $C$ labels per node and $T=2$ edges, using a first-order linear-chain (Markov) model. This model has $2\times C \times C$ parts corresponding to edges in the chain, and thus requires $\ell \in \mathbb {R}^{2\times C \times C}$. The log-partition function $A(\ell )$ factors into two reduce computations. Computing this function left-to-right using dynamic programming yields the standard forward algorithm for sequence models. As we have seen, the gradient with respect to $\ell $ produces marginals for each part, i.e. the probability of a specific labeled edge. We can further extend the same function to support generic semiring dynamic programming BIBREF34. A semiring is defined by a pair $(\oplus , \otimes )$ with a commutative $\oplus $, distributivity of $\otimes $ over $\oplus $, and appropriate identities. The log-partition utilizes $\oplus , \otimes = \mathrm {LSE}, +$, but we can substitute alternatives. For instance, utilizing the log-max semiring $(\max , +)$ in the forward algorithm yields the max score. As we have seen, its gradient with respect to $\ell $ is the argmax sequence, negating the need for a separate argmax (Viterbi) algorithm. Some distributional properties cannot be computed directly through gradient identities but still use a forward-backward style compute structure. For instance, sampling requires first computing the log-partition term and then sampling each part (forward filtering / backward sampling). We can compute this value by overriding each backpropagation operation for the $\bigoplus $ to instead compute a sample. Table TABREF16 shows the set of semirings and backpropagation steps for computing different terms of interest. We note that many of the terms necessary in the case study can be computed with variant semirings, negating the need for specialized algorithms. Optimizations Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such, Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programming. Figure FIGREF17 shows the impact of these optimizations on the core algorithms. Optimizations ::: a) Parallel Scan Inference The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing approach is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute $A(\ell )$ in this manner, we first pad the sequence length $T$ out to the nearest power of two, and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely, each node layer would compute a semiring matrix multiplication, e.g. $ \bigoplus _c \ell _{t, \cdot , c} \otimes \ell _{t^{\prime }, c, \cdot }$.
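To make the semiring view concrete, here is a small self-contained PyTorch sketch, independent of the library internals: the log-partition of a linear-chain model is computed as a balanced tree of log-semiring matrix products, and the edge marginals are then recovered by auto-differentiation, exactly as the gradient identities above suggest.

```python
import torch

def logmm(x, y):
    # (oplus, otimes) = (logsumexp, +): a "matrix product" in the log semiring.
    return torch.logsumexp(x.unsqueeze(-1) + y.unsqueeze(-3), dim=-2)

def log_partition(log_potentials):
    # log_potentials: (T, C, C), one score matrix per edge of the chain.
    mats = list(log_potentials)
    while len(mats) > 1:                          # balanced, parallel-scan style reduction
        nxt = [logmm(mats[i], mats[i + 1]) for i in range(0, len(mats) - 1, 2)]
        if len(mats) % 2 == 1:
            nxt.append(mats[-1])                  # carry the leftover matrix forward
        mats = nxt
    return torch.logsumexp(mats[0].reshape(-1), dim=0)

T, C = 7, 4
ell = torch.randn(T, C, C, requires_grad=True)
A = log_partition(ell)
A.backward()
edge_marginals = ell.grad                         # dA/d ell_p = p(part p is used)
print(float(A), edge_marginals.sum(dim=(-2, -1)))  # each edge's marginals sum to 1
```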
Under this approach, we only need $O(\log N)$ steps in Python and can use parallel GPU operations for the rest. A similar parallel approach can also be used for computing sequence alignment and semi-Markov models. Optimizations ::: b) Vectorized Parsing Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through $T$ serially; however, it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires, for all $i$, a semiring reduction over the split points $j$ of pairs of smaller spans. In order to vectorize this loop over $i, j$, we reindex the chart. Instead of using a single chart $C$, we split it into two parts: one right-facing, $C_r[i, d] = C[i, i+d]$, and one left-facing, $C_l[i+d, T-d] = C[i, i+d]$. After this reindexing, the update can be rewritten so that, unlike the original, it can easily be computed as a vectorized semiring dot product. This allows us to compute $C_r[\cdot , d]$ in one operation. Variants of this same approach can be used for all the parsing models employed. Optimizations ::: c) Semiring Matrix Operations The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\sum , \times )$ semiring, these can be computed very efficiently using matrix multiplication, which is highly tuned on GPU hardware. Unfortunately, for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of size $N \times M$ and $M \times O$, we can broadcast with $\otimes $ to a tensor of size $N \times M \times O$ and then reduce dimension $M$ with $\bigoplus $, at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory-efficient tensor operations. For log, this corresponds to computing $V_{m, o} = q + \log \sum _n \exp (T_{m,n} + U_{n, o} - q)$, where $q = \max _n T_{m,n} + U_{n, o}$. To optimize this operation on GPU we utilize the TVM language BIBREF36 to lay out the CUDA loops and tune them to the hardware. Conclusion and Future Work We present Torch-Struct, a library for deep structured prediction. The library achieves modularity through its adoption of a generic distributional API, completeness by utilizing CRFs and semirings to make it easy to add new algorithms, and efficiency through core optimizations to vectorize important dynamic programming steps. In addition to the problems discussed so far, Torch-Struct also includes several other example implementations, including supervised dependency parsing with BERT, unsupervised tagging, structured attention, and connectionist temporal classification (CTC) for speech. The full library is available at https://github.com/harvardnlp/pytorch-struct. In the future, we hope to support research and production applications employing structured models. We also believe the library provides a strong foundation for building generic tools for interpretability, control, and visualization through its probabilistic API. Finally, we hope to explore further optimizations to make core algorithms competitive with highly-optimized neural network components. Acknowledgements We thank Yoon Kim, Xiang Lisa Li, Sebastian Gehrmann, Yuntian Deng, and Justin Chiu for discussion and feedback on the project. The project was supported by NSF CAREER 1845664, NSF 1901030, and research awards by Sony and AWS.
Yes
093039f974805952636c19c12af3549aa422ec43
093039f974805952636c19c12af3549aa422ec43_0
Q: Is this library implemented into Torch or is framework agnostic? Text: Introduction Structured prediction is an area of machine learning focusing on representations of spaces with combinatorial structure, and algorithms for inference and parameter estimation over these structures. Core methods include both tractable exact approaches like dynamic programming and spanning tree algorithms as well as heuristic techniques such linear programming relaxations and greedy search. Structured prediction has played a key role in the history of natural language processing. Example methods include techniques for sequence labeling and segmentation BIBREF0, BIBREF4, discriminative dependency and constituency parsing BIBREF10, BIBREF8, unsupervised learning for labeling and alignment BIBREF11, BIBREF12, approximate translation decoding with beam search BIBREF9, among many others. In recent years, research into deep structured prediction has studied how these approaches can be integrated with neural networks and pretrained models. One line of work has utilized structured prediction as the final final layer for deep models BIBREF13, BIBREF14. Another has incorporated structured prediction within deep learning models, exploring novel models for latent-structure learning, unsupervised learning, or model control BIBREF15, BIBREF16, BIBREF17. We aspire to make both of these use-cases as easy to use as standard neural networks. The practical challenge of employing structured prediction is that many required algorithms are difficult to implement efficiently and correctly. Most projects reimplement custom versions of standard algorithms or focus particularly on a single well-defined model class. This research style makes it difficult to combine and try out new approaches, a problem that has compounded with the complexity of research in deep structured prediction. With this challenge in mind, we introduce Torch-Struct with three specific contributions: Modularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework. Completeness: a broad array of classical algorithms are implemented and new models can easily be added in Python. Efficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization. In this system description, we first motivate the approach taken by the library, then present a technical description of the methods used, and finally present several example use cases. Related Work Several software libraries target structured prediction. Optimization tools, such as SVM-struct BIBREF18, focus on parameter estimation. Model libraries, such as CRFSuite BIBREF19 or CRF++ BIBREF20, implement inference for a fixed set of popular models, such as linear-chain CRFs. General-purpose inference libraries, such as PyStruct BIBREF21 or TurboParser BIBREF22, utilize external solvers for (primarily MAP) inference such as integer linear programming solvers and ADMM. Probabilistic programming languages, for example languages that integrate with deep learning such as Pyro BIBREF23, allow for specification and inference over some discrete domains. Most ambitiously, inference libraries such as Dyna BIBREF24 allow for declarative specifications of dynamic programming algorithms to support inference for generic algorithms. Torch-Struct takes a different approach and integrates a library of optimized structured distributions into a vectorized deep learning system. 
We begin by motivating this approach with a case study. Motivating Case Study While structured prediction is traditionally presented at the output layer, recent applications have deployed structured models broadly within neural networks BIBREF15, BIBREF25, BIBREF16. Torch-Struct aims to encourage this general use case. To illustrate, we consider a latent tree model. ListOps BIBREF26 is a dataset of mathematical functions. Each data point consists of a prefix expression $x$ and its result $y$, e.g. Models such as a flat RNN will fail to capture the hierarchical structure of this task. However, if a model can induce an explicit latent $z$, the parse tree of the expression, then the task is easy to learn by a tree-RNN model $p(y | x, z)$ BIBREF16, BIBREF27. A popular approach is a latent-tree RL model which we briefly summarize. The objective is to maximize the probability of the correct prediction under the expectation of a prior tree model, $p(z|x ;\phi )$, Computing the expectation is intractable so policy gradient is used. First a tree is sampled $\tilde{z} \sim p(z | x;\phi )$, then the gradient with respect to $\phi $ is approximated as, where $b$ is a variance reduction baseline. A common choice is the self-critical baseline BIBREF28, Finally an entropy regularization term is added to the objective encourage exploration of different trees, $ O + \lambda \mathbb {H}(p(z\ |\ x;\phi ))$. Even in this brief overview, we can see how complex a latent structured learning problem can be. To compute these terms, we need 5 different properties of the tree model $p(z\ | x; \phi )$: [description]font= [itemsep=-2pt] Policy gradient, $\tilde{z} \sim p(z \ |\ x ; \phi )$ Score policy samples, $p(z \ | \ x; \phi )$ Backpropagation, $\frac{\partial }{\partial \phi } p(z\ |\ x; \phi )$ Self-critical, $\arg \max _z p(z \ |\ x;\phi )$ Objective regularizer, $\mathbb {H}(p(z\ |\ x;\phi ))$ For structured models, each of these terms is non-trivial to compute. A goal of Torch-Struct is to make it seamless to deploy structured models for these complex settings. To demonstrate this, Torch-Struct includes an implementation of this latent-tree approach. With a minimal amount of user code, the implementation achieves near perfect accuracy on the ListOps dataset. Library Design The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\ell $, the user can request samples $z \sim \textsc {CRF}(\ell )$, probabilities $\textsc {CRF}(z;\ell )$, modes $\arg \max _z \textsc {CRF}(\ell )$, or other distributional properties such as $\mathbb {H}(\textsc {CRF}(\ell ))$. The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as for more complex operations like attention or reinforcement learning. Figure FIGREF11 demonstrates this API for a binary tree CRF over an ordered sequence, such as $p(z \ | \ y ;\phi )$ from the previous section. The distribution takes in log-potentials $\ell $ which score each possible span in the input. The distribution converts these to probabilities of a specific tree. 
This distribution can be queried for predicting over the set of trees, sampling a tree for model structure, or even computing entropy over all trees. Table TABREF2 shows all of the structures and distributions implemented in Torch-Struct. While each is internally implemented using different specialized algorithms and optimizations, from the user's perspective they all utilize the same external distributional API, and pass a generic set of distributional tests. This approach hides the internal complexity of the inference procedure, while giving the user full access to the model. Technical Approach ::: Conditional Random Fields We now describe the technical approach underlying the library. To establish notation first consider the implementation of a categorical distribution, Cat($\ell $), with one-hot categories $z$ with $z_i = 1$ from a set $\cal Z$ and probabilities given by the softmax, Define the log-partition as $A(\ell ) = \mathrm {LSE}(\ell )$, i.e. log of the denominator, where $\mathrm {LSE}$ is the log-sum-exp operator. Computing probabilities or sampling from this distribution, requires enumerating $\cal Z$ to compute the log-partition $A$. A useful identity is that derivatives of $A$ yield category probabilities, Other distributional properties can be similarly extracted from variants of the log-partition. For instance, define $A^*(\ell ) = \log \max _{j=1}^K \exp \ell _j$ then: $\mathbb {I}(z^*_i = 1) = \frac{\partial }{\partial \ell _i} A^*(\ell ) $. Conditional random fields, CRF($\ell $), extend the softmax to combinatorial spaces where ${\cal Z}$ is exponentially sized. Each $z$, is now represented as a binary vector over polynomial-sized set of parts, $\cal P$, i.e. ${\cal Z} \subset \lbrace 0, 1\rbrace ^{|\cal P|}$. Similarly log-potentials are now defined over parts $\ell \in \mathbb {R}^{|\cal P|}$. For instance, in Figure FIGREF11 each span is a part and the $\ell $ vector is shown in the top-left figure. Define the probability of a structure $z$ as, Computing probabilities or sampling from this distribution, requires computing the log-partition term $A$. In general computing this term is now intractable, however for many core algorithms in NLP there are exist efficient combinatorial algorithms for this term (as enumerated in Table TABREF2). Derivatives of the log-partition again provide distributional properties. For instance, the marginal probabilities of parts are given by, Similarly derivatives of $A^*$ correspond to whether a part appears in the argmax structure. $\mathbb {I}(z^*_p = 1) = \frac{\partial }{\partial \ell _p} A^*(\ell ) $. While these gradient identities are well-known BIBREF30, they are not commonly deployed. Computing CRF properties is typically done through two-step specialized algorithms, such as forward-backward, inside-outside, or similar variants such as viterbi-backpointers BIBREF31. In our experiments, we found that using these identities with auto-differentiation on GPU was often faster, and much simpler, than custom two-pass approaches. Torch-Struct is thus designed around using gradients for distributional computations. Technical Approach ::: Dynamic Programming and Semirings Torch-Struct is a collection of generic algorithms for CRF inference. Each CRF distribution object, $\textsc {CRF}(\ell )$, is constructed by providing $\ell \in \mathbb {R}^{|{\cal P}|}$ where the parts $\cal P$ are specific to the type of distribution. 
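Relating this back to the latent-tree case study, the five quantities listed there map onto generic distribution calls. In the sketch below a plain categorical over a handful of candidate parses stands in for the structured distribution, purely to show which distributional operations the training loop needs; with Torch-Struct the same calls would be made on a tree CRF object built from span log-potentials.

```python
import torch
from torch.distributions import Categorical

# A categorical over 4 candidate parses stands in for the tree model p(z | x; phi).
logits = torch.randn(4, requires_grad=True)
dist = Categorical(logits=logits)

z_tilde = dist.sample()               # 1. policy-gradient sample  z~ ~ p(z | x; phi)
log_p = dist.log_prob(z_tilde)        # 2. score of the sampled structure
z_star = torch.argmax(dist.probs)     # 4. self-critical (argmax) structure
H = dist.entropy()                    # 5. entropy regularizer H(p(z | x; phi))

def downstream_log_likelihood(z):     # invented stand-in for log p(y | x, z)
    return -0.1 * float(z)

reward = downstream_log_likelihood(z_tilde) - downstream_log_likelihood(z_star)
loss = -(reward * log_p) - 0.01 * H   # 3. backpropagation yields the policy gradient
loss.backward()
print(logits.grad)
```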
Internally, each distribution is implemented through a single Python function for computing the log-partition function $A(\ell )$. From this function, the library uses auto-differentiation and the identities from the previous section to define a complete distribution object. The core models implemented by the library are shown in Table TABREF2. To make the approach concrete, we consider the example of a linear-chain CRF (a chain of three latent nodes $z_1 - z_2 - z_3$). The model has $C$ labels per node and $T=2$ edges, using a first-order linear-chain (Markov) model. This model has $2\times C \times C$ parts corresponding to edges in the chain, and thus requires $\ell \in \mathbb {R}^{2\times C \times C}$. The log-partition function $A(\ell )$ factors into two reduce computations. Computing this function left-to-right using dynamic programming yields the standard forward algorithm for sequence models. As we have seen, the gradient with respect to $\ell $ produces marginals for each part, i.e. the probability of a specific labeled edge. We can further extend the same function to support generic semiring dynamic programming BIBREF34. A semiring is defined by a pair $(\oplus , \otimes )$ with a commutative $\oplus $, distributivity of $\otimes $ over $\oplus $, and appropriate identities. The log-partition utilizes $\oplus , \otimes = \mathrm {LSE}, +$, but we can substitute alternatives. For instance, utilizing the log-max semiring $(\max , +)$ in the forward algorithm yields the max score. As we have seen, its gradient with respect to $\ell $ is the argmax sequence, negating the need for a separate argmax (Viterbi) algorithm. Some distributional properties cannot be computed directly through gradient identities but still use a forward-backward style compute structure. For instance, sampling requires first computing the log-partition term and then sampling each part (forward filtering / backward sampling). We can compute this value by overriding each backpropagation operation for the $\bigoplus $ to instead compute a sample. Table TABREF16 shows the set of semirings and backpropagation steps for computing different terms of interest. We note that many of the terms necessary in the case study can be computed with variant semirings, negating the need for specialized algorithms. Optimizations Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such, Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programming. Figure FIGREF17 shows the impact of these optimizations on the core algorithms. Optimizations ::: a) Parallel Scan Inference The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing approach is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute $A(\ell )$ in this manner, we first pad the sequence length $T$ out to the nearest power of two, and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely, each node layer would compute a semiring matrix multiplication, e.g. $ \bigoplus _c \ell _{t, \cdot , c} \otimes \ell _{t^{\prime }, c, \cdot }$.
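As a concrete companion to the semiring discussion, the sketch below implements the balanced reduction with a generic semiring "matrix product" in plain PyTorch and instantiates it with the log-max semiring: the result is the best-path (Viterbi) score, and its gradient marks exactly the edges of the argmax sequence.

```python
import torch

def semiring_mm(x, y, oplus):
    # Generic semiring "matrix product" with (oplus, +).
    return oplus(x.unsqueeze(-1) + y.unsqueeze(-3), dim=-2)

def reduce_chain(log_potentials, oplus):
    mats = list(log_potentials)                   # one (C, C) matrix per edge
    while len(mats) > 1:                          # balanced, parallel-scan style tree
        nxt = [semiring_mm(mats[i], mats[i + 1], oplus)
               for i in range(0, len(mats) - 1, 2)]
        if len(mats) % 2 == 1:
            nxt.append(mats[-1])
        mats = nxt
    return mats[0]

# Log-max semiring; using torch.logsumexp here instead would give the log-partition.
amax = lambda x, dim: torch.max(x, dim=dim).values

T, C = 6, 3
ell = torch.randn(T, C, C, requires_grad=True)
viterbi = reduce_chain(ell, amax).max()           # best path score
viterbi.backward()
print(float(viterbi), ell.grad.nonzero().tolist())  # gradient marks the argmax edges
```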
Under this approach, we only need $O(\log N)$ steps in Python and can use parallel GPU operations for the rest. Similar parallel approach can also be used for computing sequence alignment and semi-Markov models. Optimizations ::: b) Vectorized Parsing Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through T in serial; however it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires computing for all $i$, In order to vectorize this loop over $i, j$, we reindex the chart. Instead of using a single chart $C$, we split it into two parts: one right-facing $C_r[i, d] = C[i, i+d]$ and one left facing, $C_l[i+d, T-d] = C[i, i+d]$. After this reindexing, the update can be written. Unlike the original, this formula can easily be computed as a vectorized semiring dot product. This allows use to compute $C_r[\cdot , d]$ in one operation. Variants of this same approach can be used for all the parsing models employed. Optimizations ::: c) Semiring Matrix Operations The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\sum , \times )$ semiring these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sized $N \times M$ and $M \times O$, we can broadcast with $\otimes $ to a tensor of size $N \times M \times O$ and then reduce dim $M$ by $\bigoplus $ at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory efficient tensor operations. For log, this corresponds to computing, where $q = \max _n T_{m,n} + U_{n, o}$. To optimize this operation on GPU we utilize the TVM language BIBREF36 to layout the CUDA loops and tune it to hardware. Conclusion and Future Work We present Torch-Struct, a library for deep structured prediction. The library achieves modularity through its adoption of a generic distributional API, completeness by utilizing CRFs and semirings to make it easy to add new algorithms, and efficiency through core optimizations to vectorize important dynamic programming steps. In addition to the problems discussed so far, Torch-Struct also includes several other example implementations including supervised dependency parsing with BERT, unsupervised tagging, structured attention, and connectionist temporal classification (CTC) for speech. The full library is available at https://github.com/harvardnlp/pytorch-struct. In the future, we hope to support research and production applications employing structured models. We also believe the library provides a strong foundation for building generic tools for interpretablity, control, and visualization through its probabilistic API. Finally, we hope to explore further optimizations to make core algorithms competitive with highly-optimized neural network components. Acknowledgements We thank Yoon Kim, Xiang Lisa Li, Sebastian Gehrmann, Yuntian Deng, and Justin Chiu for discussion and feedback on the project. The project was supported by NSF CAREER 1845664, NSF 1901030, and research awards by Sony and AWS.
It uses a deep learning framework (PyTorch).
8df89988adff57279db10992846728ec4f500eaa
8df89988adff57279db10992846728ec4f500eaa_0
Q: What baselines are used in experiments? Text: Introduction Structured prediction is an area of machine learning focusing on representations of spaces with combinatorial structure, and algorithms for inference and parameter estimation over these structures. Core methods include both tractable exact approaches like dynamic programming and spanning tree algorithms as well as heuristic techniques such linear programming relaxations and greedy search. Structured prediction has played a key role in the history of natural language processing. Example methods include techniques for sequence labeling and segmentation BIBREF0, BIBREF4, discriminative dependency and constituency parsing BIBREF10, BIBREF8, unsupervised learning for labeling and alignment BIBREF11, BIBREF12, approximate translation decoding with beam search BIBREF9, among many others. In recent years, research into deep structured prediction has studied how these approaches can be integrated with neural networks and pretrained models. One line of work has utilized structured prediction as the final final layer for deep models BIBREF13, BIBREF14. Another has incorporated structured prediction within deep learning models, exploring novel models for latent-structure learning, unsupervised learning, or model control BIBREF15, BIBREF16, BIBREF17. We aspire to make both of these use-cases as easy to use as standard neural networks. The practical challenge of employing structured prediction is that many required algorithms are difficult to implement efficiently and correctly. Most projects reimplement custom versions of standard algorithms or focus particularly on a single well-defined model class. This research style makes it difficult to combine and try out new approaches, a problem that has compounded with the complexity of research in deep structured prediction. With this challenge in mind, we introduce Torch-Struct with three specific contributions: Modularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework. Completeness: a broad array of classical algorithms are implemented and new models can easily be added in Python. Efficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization. In this system description, we first motivate the approach taken by the library, then present a technical description of the methods used, and finally present several example use cases. Related Work Several software libraries target structured prediction. Optimization tools, such as SVM-struct BIBREF18, focus on parameter estimation. Model libraries, such as CRFSuite BIBREF19 or CRF++ BIBREF20, implement inference for a fixed set of popular models, such as linear-chain CRFs. General-purpose inference libraries, such as PyStruct BIBREF21 or TurboParser BIBREF22, utilize external solvers for (primarily MAP) inference such as integer linear programming solvers and ADMM. Probabilistic programming languages, for example languages that integrate with deep learning such as Pyro BIBREF23, allow for specification and inference over some discrete domains. Most ambitiously, inference libraries such as Dyna BIBREF24 allow for declarative specifications of dynamic programming algorithms to support inference for generic algorithms. Torch-Struct takes a different approach and integrates a library of optimized structured distributions into a vectorized deep learning system. We begin by motivating this approach with a case study. 
Motivating Case Study While structured prediction is traditionally presented at the output layer, recent applications have deployed structured models broadly within neural networks BIBREF15, BIBREF25, BIBREF16. Torch-Struct aims to encourage this general use case. To illustrate, we consider a latent tree model. ListOps BIBREF26 is a dataset of mathematical functions. Each data point consists of a prefix expression $x$ and its result $y$, e.g. Models such as a flat RNN will fail to capture the hierarchical structure of this task. However, if a model can induce an explicit latent $z$, the parse tree of the expression, then the task is easy to learn by a tree-RNN model $p(y | x, z)$ BIBREF16, BIBREF27. A popular approach is a latent-tree RL model which we briefly summarize. The objective is to maximize the probability of the correct prediction under the expectation of a prior tree model, $p(z|x ;\phi )$, Computing the expectation is intractable so policy gradient is used. First a tree is sampled $\tilde{z} \sim p(z | x;\phi )$, then the gradient with respect to $\phi $ is approximated as, where $b$ is a variance reduction baseline. A common choice is the self-critical baseline BIBREF28, Finally an entropy regularization term is added to the objective encourage exploration of different trees, $ O + \lambda \mathbb {H}(p(z\ |\ x;\phi ))$. Even in this brief overview, we can see how complex a latent structured learning problem can be. To compute these terms, we need 5 different properties of the tree model $p(z\ | x; \phi )$: [description]font= [itemsep=-2pt] Policy gradient, $\tilde{z} \sim p(z \ |\ x ; \phi )$ Score policy samples, $p(z \ | \ x; \phi )$ Backpropagation, $\frac{\partial }{\partial \phi } p(z\ |\ x; \phi )$ Self-critical, $\arg \max _z p(z \ |\ x;\phi )$ Objective regularizer, $\mathbb {H}(p(z\ |\ x;\phi ))$ For structured models, each of these terms is non-trivial to compute. A goal of Torch-Struct is to make it seamless to deploy structured models for these complex settings. To demonstrate this, Torch-Struct includes an implementation of this latent-tree approach. With a minimal amount of user code, the implementation achieves near perfect accuracy on the ListOps dataset. Library Design The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\ell $, the user can request samples $z \sim \textsc {CRF}(\ell )$, probabilities $\textsc {CRF}(z;\ell )$, modes $\arg \max _z \textsc {CRF}(\ell )$, or other distributional properties such as $\mathbb {H}(\textsc {CRF}(\ell ))$. The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as for more complex operations like attention or reinforcement learning. Figure FIGREF11 demonstrates this API for a binary tree CRF over an ordered sequence, such as $p(z \ | \ y ;\phi )$ from the previous section. The distribution takes in log-potentials $\ell $ which score each possible span in the input. The distribution converts these to probabilities of a specific tree. 
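As a rough sketch of the five quantities listed in the case study, the snippet below runs one policy-gradient step with a self-critical baseline and an entropy bonus. A plain Categorical distribution stands in for the structured tree model $p(z\ |\ x;\phi )$, and `reward_fn` is a hypothetical stand-in for $\log p(y\ |\ x, z)$; none of this is the library's code.

```python
import torch
from torch.distributions import Categorical

def latent_structure_step(logits, reward_fn, lam=0.01):
    # One REINFORCE step with a self-critical baseline and entropy regularizer.
    dist = Categorical(logits=logits)
    z = dist.sample()                               # 1) policy sample
    score = dist.log_prob(z)                        # 2) score the sample
    baseline = reward_fn(logits.argmax(dim=-1))     # 4) self-critical (argmax) baseline
    advantage = (reward_fn(z) - baseline).detach()
    loss = -(advantage * score).mean()              # 3) backprop flows through the score
    loss = loss - lam * dist.entropy().mean()       # 5) entropy regularizer
    loss.backward()
    return loss.item()

logits = torch.randn(4, 16, requires_grad=True)     # a toy space of 16 "parses"
latent_structure_step(logits, reward_fn=lambda z: (z == 3).float())
```

In Torch-Struct the Categorical placeholder is simply swapped for a structured distribution such as the binary tree CRF of Figure FIGREF11, which exposes the same sampling, scoring, argmax, and entropy operations.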
This distribution can be queried for predicting over the set of trees, sampling a tree for model structure, or even computing entropy over all trees. Table TABREF2 shows all of the structures and distributions implemented in Torch-Struct. While each is internally implemented using different specialized algorithms and optimizations, from the user's perspective they all utilize the same external distributional API, and pass a generic set of distributional tests. This approach hides the internal complexity of the inference procedure, while giving the user full access to the model. Technical Approach ::: Conditional Random Fields We now describe the technical approach underlying the library. To establish notation first consider the implementation of a categorical distribution, Cat($\ell $), with one-hot categories $z$ with $z_i = 1$ from a set $\cal Z$ and probabilities given by the softmax, Define the log-partition as $A(\ell ) = \mathrm {LSE}(\ell )$, i.e. log of the denominator, where $\mathrm {LSE}$ is the log-sum-exp operator. Computing probabilities or sampling from this distribution, requires enumerating $\cal Z$ to compute the log-partition $A$. A useful identity is that derivatives of $A$ yield category probabilities, Other distributional properties can be similarly extracted from variants of the log-partition. For instance, define $A^*(\ell ) = \log \max _{j=1}^K \exp \ell _j$ then: $\mathbb {I}(z^*_i = 1) = \frac{\partial }{\partial \ell _i} A^*(\ell ) $. Conditional random fields, CRF($\ell $), extend the softmax to combinatorial spaces where ${\cal Z}$ is exponentially sized. Each $z$, is now represented as a binary vector over polynomial-sized set of parts, $\cal P$, i.e. ${\cal Z} \subset \lbrace 0, 1\rbrace ^{|\cal P|}$. Similarly log-potentials are now defined over parts $\ell \in \mathbb {R}^{|\cal P|}$. For instance, in Figure FIGREF11 each span is a part and the $\ell $ vector is shown in the top-left figure. Define the probability of a structure $z$ as, Computing probabilities or sampling from this distribution, requires computing the log-partition term $A$. In general computing this term is now intractable, however for many core algorithms in NLP there are exist efficient combinatorial algorithms for this term (as enumerated in Table TABREF2). Derivatives of the log-partition again provide distributional properties. For instance, the marginal probabilities of parts are given by, Similarly derivatives of $A^*$ correspond to whether a part appears in the argmax structure. $\mathbb {I}(z^*_p = 1) = \frac{\partial }{\partial \ell _p} A^*(\ell ) $. While these gradient identities are well-known BIBREF30, they are not commonly deployed. Computing CRF properties is typically done through two-step specialized algorithms, such as forward-backward, inside-outside, or similar variants such as viterbi-backpointers BIBREF31. In our experiments, we found that using these identities with auto-differentiation on GPU was often faster, and much simpler, than custom two-pass approaches. Torch-Struct is thus designed around using gradients for distributional computations. Technical Approach ::: Dynamic Programming and Semirings Torch-Struct is a collection of generic algorithms for CRF inference. Each CRF distribution object, $\textsc {CRF}(\ell )$, is constructed by providing $\ell \in \mathbb {R}^{|{\cal P}|}$ where the parts $\cal P$ are specific to the type of distribution. 
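The gradient identities above are easy to verify for the categorical case with a few lines of autograd; the structured case follows the same pattern, with the enumeration of $\cal Z$ replaced by a dynamic program for $A(\ell )$.

```python
import torch

l = torch.randn(6, requires_grad=True)               # log-potentials ℓ
A = torch.logsumexp(l, dim=0)                         # log-partition A(ℓ)
probs = torch.autograd.grad(A, l)[0]                  # ∂A/∂ℓ_i = softmax(ℓ)_i
assert torch.allclose(probs, torch.softmax(l, dim=0))

A_star = l.max()                                      # A*(ℓ) = log max_j exp ℓ_j
one_hot = torch.autograd.grad(A_star, l)[0]           # indicator of the argmax category
```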
Internally, each distribution is implemented through a single Python function for computing the log-partition function $A(\ell )$. From this function, the library uses auto-differentiation and the identities from the previous section, to define a complete distribution object. The core models implemented by the library are shown in Table TABREF2. To make the approach concrete, we consider the example of a linear-chain CRF. latent](a)$z_1$; latent, right = of a](b)$z_2$; latent, right = of b](c)$z_3$; (a) – (b) – (c); The model has $C$ labels per node with a length $T=2$ edges utilizing a first-order linear-chain (Markov) model. This model has $2\times C \times C$ parts corresponding to edges in the chain, and thus requires $\ell \in \mathbb {R}^{2\times C \times C}$. The log-partition function $A(\ell )$ factors into two reduce computations, Computing this function left-to-right using dynamic programming yield the standard forward algorithm for sequence models. As we have seen, the gradient with respect to $\ell $ produces marginals for each part, i.e. the probability of a specific labeled edge. We can further extend the same function to support generic semiring dynamic programming BIBREF34. A semiring is defined by a pair $(\oplus , \otimes )$ with commutative $\oplus $, distribution, and appropriate identities. The log-partition utilizes $\oplus , \otimes = \mathrm {LSE}, +$, but we can substitute alternatives. For instance, utilizing the log-max semiring $(\max , +)$ in the forward algorithm yields the max score. As we have seen, its gradient with respect to $\ell $ is the argmax sequence, negating the need for a separate argmax (Viterbi) algorithm. Some distributional properties cannot be computed directly through gradient identities but still use a forward-backward style compute structure. For instance, sampling requires first computing the log-partition term and then sampling each part, (forward filtering / backward sampling). We can compute this value by overriding each backpropagation operation for the $\bigoplus $ to instead compute a sample. Table TABREF16 shows the set of semirings and backpropagation steps for computing different terms of interest. We note that many of the terms necessary in the case-study can be computed with variant semirings, negating the need for specialized algorithms. Optimizations Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programmming. Figure FIGREF17 shows the impact of these optimizations on the core algorithms. Optimizations ::: a) Parallel Scan Inference The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing approach is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute, $A(\ell )$ in this manner we first pad the sequence length $T$ out to the nearest power of two, and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely each node layer would compute a semiring matrix multiplication, e.g. $ \bigoplus _c \ell _{t, \cdot , c} \otimes \ell _{t^{\prime }, c, \cdot }$. 
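A minimal version of this balanced reduction for the linear-chain case is sketched below. It is an illustration only, assuming the edge potentials are already padded to a power-of-two length with semiring identities; the library's version is batched and semiring-generic.

```python
import torch

def log_matmul(A, B):
    # Batched (logsumexp, +) semiring matrix product.
    return torch.logsumexp(A.unsqueeze(-1) + B.unsqueeze(-3), dim=-2)

def chain_logpartition_scan(edge):
    # edge: (T, C, C) log-potentials with T padded to a power of two; padding
    # positions should hold the semiring identity (0 on the diagonal, -inf off it).
    # Each iteration is one fully parallel layer of the balanced reduction tree.
    while edge.shape[0] > 1:
        edge = log_matmul(edge[0::2], edge[1::2])
    # Sum over start and end labels (assumes any start/stop scores are already
    # folded into the edge potentials).
    return torch.logsumexp(edge[0].reshape(-1), dim=0)
```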
Under this approach, we only need $O(\log N)$ steps in Python and can use parallel GPU operations for the rest. Similar parallel approach can also be used for computing sequence alignment and semi-Markov models. Optimizations ::: b) Vectorized Parsing Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through T in serial; however it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires computing for all $i$, In order to vectorize this loop over $i, j$, we reindex the chart. Instead of using a single chart $C$, we split it into two parts: one right-facing $C_r[i, d] = C[i, i+d]$ and one left facing, $C_l[i+d, T-d] = C[i, i+d]$. After this reindexing, the update can be written. Unlike the original, this formula can easily be computed as a vectorized semiring dot product. This allows use to compute $C_r[\cdot , d]$ in one operation. Variants of this same approach can be used for all the parsing models employed. Optimizations ::: c) Semiring Matrix Operations The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\sum , \times )$ semiring these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sized $N \times M$ and $M \times O$, we can broadcast with $\otimes $ to a tensor of size $N \times M \times O$ and then reduce dim $M$ by $\bigoplus $ at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory efficient tensor operations. For log, this corresponds to computing, where $q = \max _n T_{m,n} + U_{n, o}$. To optimize this operation on GPU we utilize the TVM language BIBREF36 to layout the CUDA loops and tune it to hardware. Conclusion and Future Work We present Torch-Struct, a library for deep structured prediction. The library achieves modularity through its adoption of a generic distributional API, completeness by utilizing CRFs and semirings to make it easy to add new algorithms, and efficiency through core optimizations to vectorize important dynamic programming steps. In addition to the problems discussed so far, Torch-Struct also includes several other example implementations including supervised dependency parsing with BERT, unsupervised tagging, structured attention, and connectionist temporal classification (CTC) for speech. The full library is available at https://github.com/harvardnlp/pytorch-struct. In the future, we hope to support research and production applications employing structured models. We also believe the library provides a strong foundation for building generic tools for interpretablity, control, and visualization through its probabilistic API. Finally, we hope to explore further optimizations to make core algorithms competitive with highly-optimized neural network components. Acknowledgements We thank Yoon Kim, Xiang Lisa Li, Sebastian Gehrmann, Yuntian Deng, and Justin Chiu for discussion and feedback on the project. The project was supported by NSF CAREER 1845664, NSF 1901030, and research awards by Sony and AWS.
Typical implementations of dynamic programming algorithms are serial in the length of the sequence, Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized, Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient
94edac71eea1e78add678fb5ed2d08526b51016b
94edac71eea1e78add678fb5ed2d08526b51016b_0
Q: What general-purpose optimizations are included? Text: Introduction Structured prediction is an area of machine learning focusing on representations of spaces with combinatorial structure, and algorithms for inference and parameter estimation over these structures. Core methods include both tractable exact approaches like dynamic programming and spanning tree algorithms as well as heuristic techniques such linear programming relaxations and greedy search. Structured prediction has played a key role in the history of natural language processing. Example methods include techniques for sequence labeling and segmentation BIBREF0, BIBREF4, discriminative dependency and constituency parsing BIBREF10, BIBREF8, unsupervised learning for labeling and alignment BIBREF11, BIBREF12, approximate translation decoding with beam search BIBREF9, among many others. In recent years, research into deep structured prediction has studied how these approaches can be integrated with neural networks and pretrained models. One line of work has utilized structured prediction as the final final layer for deep models BIBREF13, BIBREF14. Another has incorporated structured prediction within deep learning models, exploring novel models for latent-structure learning, unsupervised learning, or model control BIBREF15, BIBREF16, BIBREF17. We aspire to make both of these use-cases as easy to use as standard neural networks. The practical challenge of employing structured prediction is that many required algorithms are difficult to implement efficiently and correctly. Most projects reimplement custom versions of standard algorithms or focus particularly on a single well-defined model class. This research style makes it difficult to combine and try out new approaches, a problem that has compounded with the complexity of research in deep structured prediction. With this challenge in mind, we introduce Torch-Struct with three specific contributions: Modularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework. Completeness: a broad array of classical algorithms are implemented and new models can easily be added in Python. Efficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization. In this system description, we first motivate the approach taken by the library, then present a technical description of the methods used, and finally present several example use cases. Related Work Several software libraries target structured prediction. Optimization tools, such as SVM-struct BIBREF18, focus on parameter estimation. Model libraries, such as CRFSuite BIBREF19 or CRF++ BIBREF20, implement inference for a fixed set of popular models, such as linear-chain CRFs. General-purpose inference libraries, such as PyStruct BIBREF21 or TurboParser BIBREF22, utilize external solvers for (primarily MAP) inference such as integer linear programming solvers and ADMM. Probabilistic programming languages, for example languages that integrate with deep learning such as Pyro BIBREF23, allow for specification and inference over some discrete domains. Most ambitiously, inference libraries such as Dyna BIBREF24 allow for declarative specifications of dynamic programming algorithms to support inference for generic algorithms. Torch-Struct takes a different approach and integrates a library of optimized structured distributions into a vectorized deep learning system. We begin by motivating this approach with a case study. 
Motivating Case Study While structured prediction is traditionally presented at the output layer, recent applications have deployed structured models broadly within neural networks BIBREF15, BIBREF25, BIBREF16. Torch-Struct aims to encourage this general use case. To illustrate, we consider a latent tree model. ListOps BIBREF26 is a dataset of mathematical functions. Each data point consists of a prefix expression $x$ and its result $y$, e.g. Models such as a flat RNN will fail to capture the hierarchical structure of this task. However, if a model can induce an explicit latent $z$, the parse tree of the expression, then the task is easy to learn by a tree-RNN model $p(y | x, z)$ BIBREF16, BIBREF27. A popular approach is a latent-tree RL model which we briefly summarize. The objective is to maximize the probability of the correct prediction under the expectation of a prior tree model, $p(z|x ;\phi )$, Computing the expectation is intractable so policy gradient is used. First a tree is sampled $\tilde{z} \sim p(z | x;\phi )$, then the gradient with respect to $\phi $ is approximated as, where $b$ is a variance reduction baseline. A common choice is the self-critical baseline BIBREF28, Finally an entropy regularization term is added to the objective encourage exploration of different trees, $ O + \lambda \mathbb {H}(p(z\ |\ x;\phi ))$. Even in this brief overview, we can see how complex a latent structured learning problem can be. To compute these terms, we need 5 different properties of the tree model $p(z\ | x; \phi )$: [description]font= [itemsep=-2pt] Policy gradient, $\tilde{z} \sim p(z \ |\ x ; \phi )$ Score policy samples, $p(z \ | \ x; \phi )$ Backpropagation, $\frac{\partial }{\partial \phi } p(z\ |\ x; \phi )$ Self-critical, $\arg \max _z p(z \ |\ x;\phi )$ Objective regularizer, $\mathbb {H}(p(z\ |\ x;\phi ))$ For structured models, each of these terms is non-trivial to compute. A goal of Torch-Struct is to make it seamless to deploy structured models for these complex settings. To demonstrate this, Torch-Struct includes an implementation of this latent-tree approach. With a minimal amount of user code, the implementation achieves near perfect accuracy on the ListOps dataset. Library Design The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\ell $, the user can request samples $z \sim \textsc {CRF}(\ell )$, probabilities $\textsc {CRF}(z;\ell )$, modes $\arg \max _z \textsc {CRF}(\ell )$, or other distributional properties such as $\mathbb {H}(\textsc {CRF}(\ell ))$. The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as for more complex operations like attention or reinforcement learning. Figure FIGREF11 demonstrates this API for a binary tree CRF over an ordered sequence, such as $p(z \ | \ y ;\phi )$ from the previous section. The distribution takes in log-potentials $\ell $ which score each possible span in the input. The distribution converts these to probabilities of a specific tree. 
This distribution can be queried for predicting over the set of trees, sampling a tree for model structure, or even computing entropy over all trees. Table TABREF2 shows all of the structures and distributions implemented in Torch-Struct. While each is internally implemented using different specialized algorithms and optimizations, from the user's perspective they all utilize the same external distributional API, and pass a generic set of distributional tests. This approach hides the internal complexity of the inference procedure, while giving the user full access to the model. Technical Approach ::: Conditional Random Fields We now describe the technical approach underlying the library. To establish notation first consider the implementation of a categorical distribution, Cat($\ell $), with one-hot categories $z$ with $z_i = 1$ from a set $\cal Z$ and probabilities given by the softmax, Define the log-partition as $A(\ell ) = \mathrm {LSE}(\ell )$, i.e. log of the denominator, where $\mathrm {LSE}$ is the log-sum-exp operator. Computing probabilities or sampling from this distribution, requires enumerating $\cal Z$ to compute the log-partition $A$. A useful identity is that derivatives of $A$ yield category probabilities, Other distributional properties can be similarly extracted from variants of the log-partition. For instance, define $A^*(\ell ) = \log \max _{j=1}^K \exp \ell _j$ then: $\mathbb {I}(z^*_i = 1) = \frac{\partial }{\partial \ell _i} A^*(\ell ) $. Conditional random fields, CRF($\ell $), extend the softmax to combinatorial spaces where ${\cal Z}$ is exponentially sized. Each $z$, is now represented as a binary vector over polynomial-sized set of parts, $\cal P$, i.e. ${\cal Z} \subset \lbrace 0, 1\rbrace ^{|\cal P|}$. Similarly log-potentials are now defined over parts $\ell \in \mathbb {R}^{|\cal P|}$. For instance, in Figure FIGREF11 each span is a part and the $\ell $ vector is shown in the top-left figure. Define the probability of a structure $z$ as, Computing probabilities or sampling from this distribution, requires computing the log-partition term $A$. In general computing this term is now intractable, however for many core algorithms in NLP there are exist efficient combinatorial algorithms for this term (as enumerated in Table TABREF2). Derivatives of the log-partition again provide distributional properties. For instance, the marginal probabilities of parts are given by, Similarly derivatives of $A^*$ correspond to whether a part appears in the argmax structure. $\mathbb {I}(z^*_p = 1) = \frac{\partial }{\partial \ell _p} A^*(\ell ) $. While these gradient identities are well-known BIBREF30, they are not commonly deployed. Computing CRF properties is typically done through two-step specialized algorithms, such as forward-backward, inside-outside, or similar variants such as viterbi-backpointers BIBREF31. In our experiments, we found that using these identities with auto-differentiation on GPU was often faster, and much simpler, than custom two-pass approaches. Torch-Struct is thus designed around using gradients for distributional computations. Technical Approach ::: Dynamic Programming and Semirings Torch-Struct is a collection of generic algorithms for CRF inference. Each CRF distribution object, $\textsc {CRF}(\ell )$, is constructed by providing $\ell \in \mathbb {R}^{|{\cal P}|}$ where the parts $\cal P$ are specific to the type of distribution. 
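As a concrete, simplified preview of that recipe, the sketch below computes the linear-chain log-partition with a serial forward pass and recovers edge marginals purely through autograd; it omits batching and generic semirings and is not the library's implementation.

```python
import torch

def chain_logpartition(edge):
    # Serial forward algorithm: edge[t, i, j] is the log-potential of label i at
    # position t followed by label j at position t + 1.
    alpha = torch.logsumexp(edge[0], dim=0)
    for t in range(1, edge.shape[0]):
        alpha = torch.logsumexp(alpha.unsqueeze(1) + edge[t], dim=0)
    return torch.logsumexp(alpha, dim=0)

edge = torch.randn(5, 4, 4, requires_grad=True)       # T = 5 edges, C = 4 labels
A = chain_logpartition(edge)
marginals = torch.autograd.grad(A, edge)[0]           # ∂A/∂ℓ = edge marginals
# Swapping logsumexp for max gives A*(ℓ); its gradient marks the Viterbi path.
```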
Internally, each distribution is implemented through a single Python function for computing the log-partition function $A(\ell )$. From this function, the library uses auto-differentiation and the identities from the previous section, to define a complete distribution object. The core models implemented by the library are shown in Table TABREF2. To make the approach concrete, we consider the example of a linear-chain CRF. latent](a)$z_1$; latent, right = of a](b)$z_2$; latent, right = of b](c)$z_3$; (a) – (b) – (c); The model has $C$ labels per node with a length $T=2$ edges utilizing a first-order linear-chain (Markov) model. This model has $2\times C \times C$ parts corresponding to edges in the chain, and thus requires $\ell \in \mathbb {R}^{2\times C \times C}$. The log-partition function $A(\ell )$ factors into two reduce computations, Computing this function left-to-right using dynamic programming yield the standard forward algorithm for sequence models. As we have seen, the gradient with respect to $\ell $ produces marginals for each part, i.e. the probability of a specific labeled edge. We can further extend the same function to support generic semiring dynamic programming BIBREF34. A semiring is defined by a pair $(\oplus , \otimes )$ with commutative $\oplus $, distribution, and appropriate identities. The log-partition utilizes $\oplus , \otimes = \mathrm {LSE}, +$, but we can substitute alternatives. For instance, utilizing the log-max semiring $(\max , +)$ in the forward algorithm yields the max score. As we have seen, its gradient with respect to $\ell $ is the argmax sequence, negating the need for a separate argmax (Viterbi) algorithm. Some distributional properties cannot be computed directly through gradient identities but still use a forward-backward style compute structure. For instance, sampling requires first computing the log-partition term and then sampling each part, (forward filtering / backward sampling). We can compute this value by overriding each backpropagation operation for the $\bigoplus $ to instead compute a sample. Table TABREF16 shows the set of semirings and backpropagation steps for computing different terms of interest. We note that many of the terms necessary in the case-study can be computed with variant semirings, negating the need for specialized algorithms. Optimizations Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programmming. Figure FIGREF17 shows the impact of these optimizations on the core algorithms. Optimizations ::: a) Parallel Scan Inference The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing approach is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute, $A(\ell )$ in this manner we first pad the sequence length $T$ out to the nearest power of two, and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely each node layer would compute a semiring matrix multiplication, e.g. $ \bigoplus _c \ell _{t, \cdot , c} \otimes \ell _{t^{\prime }, c, \cdot }$. 
Under this approach, we only need $O(\log N)$ steps in Python and can use parallel GPU operations for the rest. Similar parallel approach can also be used for computing sequence alignment and semi-Markov models. Optimizations ::: b) Vectorized Parsing Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through T in serial; however it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires computing for all $i$, In order to vectorize this loop over $i, j$, we reindex the chart. Instead of using a single chart $C$, we split it into two parts: one right-facing $C_r[i, d] = C[i, i+d]$ and one left facing, $C_l[i+d, T-d] = C[i, i+d]$. After this reindexing, the update can be written. Unlike the original, this formula can easily be computed as a vectorized semiring dot product. This allows use to compute $C_r[\cdot , d]$ in one operation. Variants of this same approach can be used for all the parsing models employed. Optimizations ::: c) Semiring Matrix Operations The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\sum , \times )$ semiring these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sized $N \times M$ and $M \times O$, we can broadcast with $\otimes $ to a tensor of size $N \times M \times O$ and then reduce dim $M$ by $\bigoplus $ at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory efficient tensor operations. For log, this corresponds to computing, where $q = \max _n T_{m,n} + U_{n, o}$. To optimize this operation on GPU we utilize the TVM language BIBREF36 to layout the CUDA loops and tune it to hardware. Conclusion and Future Work We present Torch-Struct, a library for deep structured prediction. The library achieves modularity through its adoption of a generic distributional API, completeness by utilizing CRFs and semirings to make it easy to add new algorithms, and efficiency through core optimizations to vectorize important dynamic programming steps. In addition to the problems discussed so far, Torch-Struct also includes several other example implementations including supervised dependency parsing with BERT, unsupervised tagging, structured attention, and connectionist temporal classification (CTC) for speech. The full library is available at https://github.com/harvardnlp/pytorch-struct. In the future, we hope to support research and production applications employing structured models. We also believe the library provides a strong foundation for building generic tools for interpretablity, control, and visualization through its probabilistic API. Finally, we hope to explore further optimizations to make core algorithms competitive with highly-optimized neural network components. Acknowledgements We thank Yoon Kim, Xiang Lisa Li, Sebastian Gehrmann, Yuntian Deng, and Justin Chiu for discussion and feedback on the project. The project was supported by NSF CAREER 1845664, NSF 1901030, and research awards by Sony and AWS.
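To illustrate the vectorized-parsing reindexing described above, the following sketch implements the Inside algorithm for a simplified, grammar-free tree CRF in which every span carries a single log-potential. The Python loop runs only over widths, and each width is one vectorized update over the reindexed charts; this is an illustration, not the library's batched, semiring-generic code.

```python
import torch

def inside_logpartition(span_score):
    # span_score: (T, T); entry [i, j] is the log-potential of span [i, j].
    # C_r[i, d] and C_l[j, T - d] both store the inside score of a width-d span,
    # so every width-d update is a single (log, +) dot product over split points.
    T = span_score.shape[0]
    C_r = torch.full((T, T + 1), float("-inf"))
    C_l = torch.full((T, T + 1), float("-inf"))
    idx = torch.arange(T)
    C_r[idx, 0] = span_score[idx, idx]                 # width-0 spans [i, i]
    C_l[idx, T] = span_score[idx, idx]
    for d in range(1, T):                              # serial over widths only
        i = torch.arange(T - d)
        left = C_r[i, :d]                              # spans [i, i+k] for k < d
        right = C_l[i + d, T - d + 1:]                 # matching spans [i+k+1, i+d]
        C_r[i, d] = span_score[i, i + d] + torch.logsumexp(left + right, dim=1)
        C_l[i + d, T - d] = C_r[i, d]
    return C_r[0, T - 1]                               # log-partition over all trees
```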
Parallel Scan Inference, Vectorized Parsing, Semiring Matrix Operations
9c4ed8ca59ba6d240f031393b01f634a9dc3615d
9c4ed8ca59ba6d240f031393b01f634a9dc3615d_0
Q: what baseline do they compare to? Text: Targeted Sentiment Classification Opinions are everywhere in our lives. Every time we open a book, read the newspaper, or look at social media, we scan for opinions or form them ourselves. We are cued to the opinions of others, and often use this information to update our own opinions Asch1955,Das2014. This is true on the Internet as much as it is in our face-to-face relationships. In fact, with its wealth of opinionated material available online, it has become feasible and interesting to harness this data in order to automatically identify opinions, which had previously been far more expensive and tedious when the only access to data was offline. Sentiment analysis, sometimes referred to as opinion mining, seeks to create data-driven methods to classify the polarity of a text. The information obtained from sentiment classifiers can then be used for tracking user opinions in different domains Pang2002,Socher2013b,Nakov2013, predicting the outcome of political elections wang2012demo,bakliwal2013, detecting hate speech online Nahar2012,hartung-EtAl:2017:WASSA2017, as well as predicting changes in the stock market Pogolu2016. Sentiment analysis can be modeled as a classification task, especially at sentence- and document-level, or as a sequence-labeling task at target-level. Targeted sentiment analysis aims at predicting the polarity expressed towards a particular entity or sub-aspect of that entity. This is a more realistic view of sentiment, as polarities are directed towards targets, not spread uniformly across sentences or documents. Take the following example, where we mark the sentiment target with green, positive sentiment expressions with blue, and negative sentiment expressions with red.: The café near my house has great coffee but I never go there because the service is terrible. In this sentence, it is not stated what the sentiment towards the target “café” is, while the sentiment of the target “coffee” is positive and that of “service” is negative. In order to correctly classify the sentiment of each target, it is necessary to (1) detect the targets, (2) detect polarity expressions, and (3) resolve the relations between these. In order to model these relationships and test the accuracy of the learned models, hand-annotated resources are typically used for training machine learning algorithms. Resource-rich languages, e. g., English, have high-quality annotated data for both classification and sequence-labeling tasks, as well as for a variety of domains. However, under-resourced languages either completely lack annotated data or have only a few resources for specific domains or sentiment tasks. For instance, for aspect-level sentiment analysis, English has datasets available in the news domain Wiebe2005, product review domain HuandLiu2004,Ding2008,Pontiki2014,Pontiki2015, education domain Welch2016, medical domain Grasser2018, urban neighborhood domain Saeidi2016, and financial Maia2018 domain. Spanish, on the other hand, has only three datasets Agerri2013,Pontiki2016, while Basque and Catalan only have one each for a single domain Barnes2018a. The cost of annotating data can often be prohibitive as training native-speakers to annotate fine-grained sentiment is a long process. This motivates the need to develop sentiment analysis methods capable of leveraging data annotated in other languages. 
Cross-Lingual Approaches to Sentiment Analysis Previous work on cross-lingual sentiment analysis (CLSA) offers a way to perform sentiment analysis in an under-resourced language that does not have any annotated data available. Most methods relied on the availability of large amounts of parallel data to transfer sentiment information across languages. Machine translation (MT), for example, has been the most common approach to cross-lingual sentiment analysis Banea2013,Almeida2015,Zhang2017. Machine translation, however, can be biased towards domains Hua2008,Bertoldi2009,Koehn2017, does not always preserve sentiment Mohammad2016, and requires millions of parallel sentences Gavrila2011,Vaswani2017, which places a limit on which languages can benefit from these approaches. The following example illustrates that MT does not preserve sentiment (hotel review in Basque, automatically translated via translate.google.com): Hotel $^{1}$ txukuna da, nahiko berria. Harreran zeuden langileen arreta $^{2}$ ez zen onena izan. Tren geltoki bat $^{3}$ du 5 minutura eta kotxez $^{4}$ berehala iristen da baina oinez $^{5}$ urruti samar dago. The hotel $^{1}$ is tidy, quite new. The care of the workers at reception $^{2}$ was not the best. It's 5 minutes away from a train station $^{3}$ and it's quick to reach the car $^{4}$ , but it's a short distance away. While the first two sentences are mostly well translated for the purposes of sentiment analysis, in the third, there are a number of reformulations and deletions that lead to a loss of information. It should read “It has a train station five minutes away and by car you can reach it quickly, but by foot it's quite a distance.” We can see that one of the targets has been deleted and the sentiment has flipped from negative to positive. Such common problems degrade the results of cross-lingual sentiment systems that use MT, especially at target-level. Although high quality machine translation systems exist between many languages and have been shown to enable cross-lingual sentiment analysis, for the vast majority of language pairs in the world there is not enough parallel data to create these high quality MT systems. This lack of parallel data coupled with the computational expense of MT means that approaches to cross-lingual sentiment analysis that do not require MT should be preferred. Additionally, most cross-lingual sentiment approaches using MT have concentrated on sentence- and document-level, and have not explored targeted or aspect-level sentiment tasks. Bilingual Distributional Models and the Contributions of this Paper Recently, several bilingual distributional semantics models (bilingual embeddings) have been proposed and provide a useful framework for cross-lingual research without requiring machine translation. They are effective at generating features for bilingual dictionary induction Mikolov2013translation,Artetxe2016,Lample2017, cross-lingual text classification Prettenhofer2011b,Chandar2014, or cross-lingual dependency parsing Sogaard2015, among others. In this framework, words are represented as $n$ -dimensional vectors which are created on large monolingual corpora in order to (1) maximize the similarity of words that appear in similar contexts and use some bilingual regularization in order to (2) maximize the similarity of translation pairs. In this work, we concentrate on a subset of these bilingual embedding methods that perform a post-hoc mapping to a bilingual space, which we refer to as embedding projection methods. 
One of the main advantages of these methods is that they make better use of small amounts of parallel data than MT systems, even enabling unsupervised machine translation Artetxe2018,Lample2018. With this paper, we provide the first extensive evaluation of cross-lingual embeddings for targeted sentiment tasks. We formulate the task of targeted sentiment analysis as classification, given the targets from an oracle. The question we attempt to address is how to infer the polarity of a sentiment target in a language that does not have any annotated sentiment data or parallel corpora with a resource-rich language. In the following Catalan sentence, for example, how can we determine that the sentiment of “servei” is negative, while that of “menjar” is positive if we do not have annotated data in Catalan or parallel data for English-Catalan? El servei al restaurant va ser péssim. Al menys el menjar era bo. Specifically, we propose an approach which requires (1) minimal bilingual data and instead makes use of (2) high-quality monolingual word embeddings in the source and target language. We take an intermediate step by first testing this approach on sentence-level classification. After confirming that our approach performs well at sentence-level, we propose a targeted model with the same data requirements. The main contributions are that we compare projection-based cross-lingual methods to MT, extend previous cross-lingual approaches to enable targeted cross-lingual sentiment analysis with minimal parallel data requirements, compare different model architectures for cross-lingual targeted sentiment analysis, perform a detailed error analysis, and detailing the advantages and disadvantages of each method, and, finally, deploy the methods in a realistic case-study to analyze their suitability beyond applications on (naturally) limited language pairs. In addition, we make our code and data publicly available at https://github.com/jbarnesspain/targeted_blse to support future research. The rest of the article is organized as follows: In Section "Previous Work" , we detail related work and motivate the need for a different approach. In Section "Projecting Sentiment Across Languages" , we describe both the sentence-level and targeted projection approaches that we propose. In Section "Experiments" , we detail the resources and experimental setup for both sentence and targeted classification. In Section "Results" , we describe the results of the two experiments, as well as perform a detailed error analysis. In Section "Case Study: Real World Deployment" , we perform a case study whose purpose is to give a more qualitative view of the models. Finally, we discuss the implications of the results in Section "Conclusion" . Previous Work Sentiment analysis has become an enormously popular task with a focus on classification approaches on individual languages, but there has not been as much work on cross-lingual approaches. In this section, we detail the most relevant work on cross-lingual sentiment analysis and lay the basis for the bilingual embedding approach we propose later. Machine Translation Based Methods Early work in cross-lingual sentiment analysis found that machine translation (MT) had reached a point of maturity that enabled the transfer of sentiment across languages. Researchers translated sentiment lexicons Mihalcea2007,Meng2012 or annotated corpora and used word alignments to project sentiment annotation and create target-language annotated corpora Banea2008,Duh2011a,Demirtas2013,Balahur2014d. 
Several approaches included a multi-view representation of the data Banea2010,Xiao2012 or co-training Wan2009,Demirtas2013 to improve over a naive implementation of machine translation, where only the translated version of the data is considered. There are also approaches which only require parallel data Meng2012,Zhou2016,Rasooli2017, instead of machine translation. All of these approaches, however, require large amounts of parallel data or an existing high quality translation tool, which are not always available. To tackle this issue, Barnes2016 explore cross-lingual approaches for aspect-based sentiment analysis, comparing machine translation methods and those that instead rely on bilingual vector representations. They conclude that MT approaches outperform current bilingual representation methods. Chen2016 propose an adversarial deep averaging network, which trains a joint feature extractor for two languages. They minimize the difference between these features across languages by learning to fool a language discriminator. This requires no parallel data, but does require large amounts of unlabeled data and has not been tested on fine-grained sentiment analysis. Bilingual Embedding Methods Recently proposed bilingual embedding methods Hermann2014,Chandar2014,Gouws2015 offer a natural way to bridge the language gap. These particular approaches to bilingual embeddings, however, also require large parallel corpora in order to build the bilingual space, which gives no advantage over machine translation. Another approach to creating bilingual word embeddings, which we refer to as Projection-based Bilingual Embeddings, has the advantage of requiring relatively little parallel training data while taking advantage of larger amounts of monolingual data. In the following, we describe the most relevant approaches. Mikolov2013translation find that vector spaces in different languages have similar arrangements. Therefore, they propose a linear projection which consists of learning a rotation and scaling matrix. Artetxe2016,Artetxe2017 improve upon this approach by requiring the projection to be orthogonal, thereby preserving the monolingual quality of the original word vectors. Given source embeddings $S$ , target embeddings $T$ , and a bilingual lexicon $L$ , Artetxe2016 learn a projection matrix $W$ by minimizing the square of Euclidean distances $$\operatornamewithlimits{arg\,min}_W \sum _{i} ||S^{\prime }W-T^{\prime }||_{F}^{2}\,,$$ (Eq. 13) where $S^{\prime } \in S$ and $T^{\prime } \in T$ are the word embedding matrices for the tokens in the bilingual lexicon $L$ . This is solved using the Moore-Penrose pseudoinverse $S^{\prime +} = (S^{\prime T}S^{\prime })^{-1}S^{\prime T}$ as $ W = S^{\prime +}T^{\prime }$ , which can be computed using SVD. We refer to this approach as VecMap. Lample2017 propose a similar refined orthogonal projection method to Artetxe2017, but include an adversarial discriminator, which seeks to discriminate samples from the projected space $WS$ , and the target $T$ , while the projection matrix $W$ attempts to prevent this making the projection from the source space $WS$ as similar to the target space $T$ as possible. They further refine their projection matrix by reducing the hubness problem Dinu2015, which is commonly found in high-dimensional spaces. For each projected embedding $Wx$ , they define the $k$ nearest neighbors in the target space, $\mathcal {N}_{T}$ , suggesting $k = 10$ . 
They consider the mean cosine similarity $r_{T}(Wx)$ between a projected embedding $Wx$ and its $k$ nearest neighbors $$r_{T}(Wx) = \frac{1}{k} \sum _{y \in \mathcal {N}_{T}(Wx) } \cos (Wx,y)$$ (Eq. 15) as well as the mean cosine of a target word $y$ to its neighborhood, which they denote by $r_{S}$ . In order to decrease similarity between mapped vectors lying in dense areas, they introduce a cross-domain similarity local scaling term (CSLS) $$\textrm {CSLS}(Wx,y) = 2 \cos (Wx,y) - r_{T}(Wx) - r_{S}(y)\,,$$ (Eq. 16) which they find improves accuracy, while not requiring any parameter tuning. Gouws2015taskspecific propose a method to create a pseudo-bilingual corpus with a small task-specific bilingual lexicon, which can then be used to train bilingual embeddings (Barista). This approach requires a monolingual corpus in both the source and target languages and a set of translation pairs. The source and target corpora are concatenated and then every word is randomly kept or replaced by its translation with a probability of 0.5. Any kind of word embedding algorithm can be trained with this pseudo-bilingual corpus to create bilingual word embeddings. Sentiment Embeddings Maas2011 first explored the idea of incorporating sentiment information into semantic word vectors. They proposed a topic modeling approach similar to latent Dirichlet allocation in order to collect the semantic information in their word vectors. To incorporate the sentiment information, they included a second objective whereby they maximize the probability of the sentiment label for each word in a labeled document. Tang2014 exploit distantly annotated tweets to create Twitter sentiment embeddings. To incorporate distributional information about tokens, they use a hinge loss and maximize the likelihood of a true $n$ -gram over a corrupted $n$ -gram. They include a second objective where they classify the polarity of the tweet given the true $n$ -gram. While these techniques have proven useful, they are not easily transferred to a cross-lingual setting. Zhou2015 create bilingual sentiment embeddings by translating all source data to the target language and vice versa. This requires the existence of a machine translation system, which is a prohibitive assumption for many under-resourced languages, especially if it must be open and freely accessible. This motivates approaches which can use smaller amounts of parallel data to achieve similar results. Targeted Sentiment Analysis The methods discussed so far focus on classifying textual phrases like documents or sentences. Next to these approaches, others have concentrated on classifying aspects HuandLiu2004,Liu2012,Pontiki2014 or targets Zhang2015,Zhang2016,Tang2016 to assign them with polarity values. A common technique when adapting neural architectures to targeted sentiment analysis is to break the text into left context, target, and right context Zhang2015,Zhang2016, alternatively keeping the target as the final/beginning token in the respective contexts Tang2016. The model then extracts a feature vector from each context and target, using some neural architecture, and concatenates the outputs for classification. More recent approaches attempt to augment a neural network with memory to model these interactions Chen2017,Xue2018,Wang2018,Liu2018. Wang2017 explore methods to improve classification of multiple aspects in tweets, while Akhtar2018 attempt to use cross-lingual and multilingual data to improve aspect-based sentiment analysis in under-resourced languages. 
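Both the pseudoinverse projection of Eq. 13 and the CSLS criterion of Eqs. 15–16 are short to write down. The NumPy sketch below assumes length-normalized embedding matrices and is meant only to make the formulas concrete, not to reproduce any released implementation.

```python
import numpy as np

def fit_projection(S_sub, T_sub):
    # W = argmin_W ||S'W - T'||_F, solved with the Moore-Penrose pseudoinverse
    # (Eq. 13); Artetxe et al.'s variant additionally constrains W to be
    # orthogonal via an SVD.
    return np.linalg.pinv(S_sub) @ T_sub

def csls_scores(WX, T_emb, k=10):
    # CSLS(Wx, y) = 2 cos(Wx, y) - r_T(Wx) - r_S(y)   (Eqs. 15-16).
    # WX: (n, d) projected source embeddings, T_emb: (V, d) target embeddings,
    # both assumed length-normalized so dot products are cosines.
    cos = WX @ T_emb.T                                  # (n, V) cosine similarities
    r_T = np.sort(cos, axis=1)[:, -k:].mean(axis=1)     # mean cos of Wx's k neighbors
    r_S = np.sort(cos, axis=0)[-k:, :].mean(axis=0)     # mean cos of y's k neighbors
    return 2 * cos - r_T[:, None] - r_S[None, :]
```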
As mentioned before, MT has traditionally been the main approach for transferring information across language barriers BIBREF0 . But this is particularly problematic for targeted sentiment analysis, as changes in word order or loss of words created during translation can directly affect the performance of a classifier Lambert2015. Projecting Sentiment Across Languages In this section, we propose a novel approach to incorporate sentiment information into bilingual embeddings, which we first test on sentence-level cross-lingual sentiment classification. We then propose an extension in order to adapt this approach to targeted cross-lingual sentiment classification. Our model, Bilingual Sentiment Embeddings (Blse), are embeddings that are jointly optimized to represent both (a) semantic information in the source and target languages, which are bound to each other through a small bilingual dictionary, and (b) sentiment information, which is annotated on the source language only. We only need three resources: (1) a comparably small bilingual lexicon, (2) an annotated sentiment corpus in the resource-rich language, and (3) monolingual word embeddings for the two involved languages. Sentence-level Model In this section, we detail the projection objective, the sentiment objective, and finally the full objective for sentence-level cross-lingual sentiment classification. A sketch of the full sentence-level model is depicted in Figure 1 . We assume that we have two precomputed vector spaces $S = \mathbb {R}^{v \times d}$ and $T = \mathbb {R}^{v^{\prime } \times d^{\prime }}$ for our source and target languages, where $v$ ( $v^{\prime }$ ) is the length of the source vocabulary (target vocabulary) and $d$ ( $d^{\prime }$ ) is the dimensionality of the embeddings. We also assume that we have a bilingual lexicon $L$ of length $n$ which consists of word-to-word translation pairs $L$ = $\lbrace (s_{1},t_{1}), (s_{2},t_{2}),\ldots , (s_{n}, t_{n})\rbrace $ which map from source to target. In order to create a mapping from both original vector spaces $S$ and $T$ to shared sentiment-informed bilingual spaces $\mathbf {z}$ and $\mathbf {\hat{z}}$ , we employ two linear projection matrices, $M$ and $M^{\prime }$ . During training, for each translation pair in $L$ , we first look up their associated vectors, project them through their associated projection matrix and finally minimize the mean squared error of the two projected vectors. This is similar to the approach taken by Mikolov2013translation , but includes an additional target projection matrix. The intuition for including this second matrix is that a single projection matrix does not support the transfer of sentiment information from the source language to the target language. Without $M^{\prime }$ , any signal coming from the sentiment classifier (see Section UID27 ) would have no affect on the target embedding space $T$ , and optimizing $M$ to predict sentiment and projection would only be detrimental to classification of the target language. We analyze this further in Section UID63 . Note that in this configuration, we do not need to update the original vector spaces, which would be problematic with such small training data. The projection quality is ensured by minimizing the mean squared error $$\textrm {MSE} = \dfrac{1}{n} \sum _{i=1}^{n} (\mathbf {z_{i}} - \mathbf {\hat{z}_{i}})^{2}\,,$$ (Eq. 
26) where $\mathbf {z_{i}} = S_{s_{i}} \cdot M$ is the dot product of the embedding for source word $s_{i}$ and the source projection matrix and $\mathbf {\hat{z}_{i}} = T_{t_{i}} \cdot M^{\prime }$ is the same for the target word $t_{i}$ . We add a second training objective to optimize the projected source vectors to predict the sentiment of source phrases. This inevitably changes the projection characteristics of the matrix $M$ , and consequently $M^{\prime }$ and encourages $M^{\prime }$ to learn to predict sentiment without any training examples in the target language. In order to train $M$ to predict sentiment, we require a source-language corpus $C_{\textrm {source}}= \lbrace (x_{1}, y_{1}), (x_{2}, y_{2}), \ldots , (x_{i}, y_{i})\rbrace $ where each sentence $x_{i}$ is associated with a label $y_{i}$ . For classification, we use a two-layer feed-forward averaging network, loosely following Iyyer2015 . For a sentence $x_{i}$ we take the word embeddings from the source embedding $S$ and average them to $\mathbf {a}_{i} \in \mathbb {R}^{d}$ . We then project this vector to the joint bilingual space $\mathbf {z}_{i} = \mathbf {a}_{i} \cdot M$ . Finally, we pass $\mathbf {z}_{i}$ through a softmax layer $P$ to obtain the prediction $\hat{y}_{i} = \textrm {softmax} ( \mathbf {z}_{i} \cdot P)$ . To train our model to predict sentiment, we minimize the cross-entropy error of the predictions $$H = - \sum _{i=1}^{n} y_{i} \log \hat{y_{i}} - (1 - y_{i}) \log (1 - \hat{y_{i}})\,.$$ (Eq. 29) In order to jointly train both the projection component and the sentiment component, we combine the two loss functions to optimize the parameter matrices $M$ , $M^{\prime }$ , and $P$ by $$J =\hspace{-14.22636pt}\sum _{(x,y) \in C_{\textrm {source}}}\hspace{2.84526pt}\sum _{(s,t) \in L}\hspace{0.0pt}\alpha H(x,y) + (1 - \alpha ) \cdot \textrm {MSE}(s,t)\,,$$ (Eq. 31) where $\alpha $ is a hyperparameter that weights sentiment loss vs. projection loss. For inference, we classify sentences from a target-language corpus $C_{\textrm {target}}$ . As in the training procedure, for each sentence, we take the word embeddings from the target embeddings $T$ and average them to $\mathbf {a}_{i} \in \mathbb {R}^{d}$ . We then project this vector to the joint bilingual space $\mathbf {\hat{z}}_{i} = \mathbf {a}_{i} \cdot M^{\prime }$ . Finally, we pass $\mathbf {\hat{z}}_{i}$ through a softmax layer $P$ to obtain the prediction $\hat{y}_{i} = \textrm {softmax} ( \mathbf {\hat{z}}_{i} \cdot P)$ . Targeted Model In our targeted model, we assume that the list of sentiment targets as they occur in the text is given. These can be extracted previously either by using domain knowledge Liu2005, by using a named entity recognizer Zhang2015 or by using a number of aspect extraction techniques Zhou2012. Given these targets, the task is reduced to classification. However, what remains is how to represent the target, to learn to subselect the information from the context which is relevant, how to represent this contextual information, and how to combine these representations in a meaningful way that enables us to classify the target reliably. Our approach to adapt the Blse model to targeted sentiment analysis, which we call Split (depicted in Figure 2 ), is similar to the method proposed by Zhang2016 for gated recurrent networks. For a sentence with a target $a$ , we split the sentence at $a$ in order to get a left and right context, $\textrm {con}_\ell (a)$ and $\textrm {con}_r(a)$ respectively. 
Unlike the approach from Zhang2016, we do not use recurrent neural networks to create a feature vector, as Atrio2019 showed that, in cross-lingual setups, they overfit too much to word order and source-language specific information to perform well on our tasks. Therefore, we instead average each left context $\textrm {con}_\ell (a_i)$ , right context $\textrm {con}_r(a_i)$ , and target $a_{i}$ separately. Although averaging is a simplified approach to create a compositional representation of a phrase, it has been shown to work well for sentiment Iyyer2015,Barnes2017. After creating a single averaged vector for the left context, right context, and target, we concatenate them and use these as input for the softmax classification layer $T \in \mathbb {R}^{d \times 3}$ , where $d$ is the dimensionality of the input vectors. The model is trained on the source language sentiment data using $M$ to project, and then tested by replacing $M$ with $M^{^{\prime }}$ , similar to the sentence-level model. Experiments In this section, we describe the resources and datasets, as well as the experimental setups used in both the sentence-level (Experiment 1 in Subsection "Setting for Experiment 1: Sentence-level Classification" ) and targeted (Experiment 2 in Subsection "Setting for Experiment 2: Targeted Classification" ) experiments. Datasets and Resources The number of datasets and resources for under-resourced languages are limited. Therefore, we choose a mixture of resource-rich and under-resourced languages for our experiments. We treat the resource-rich languages as if they were under-resourced by using similar amounts of parallel data. To evaluate our proposed model at sentence-level, we conduct experiments using four benchmark datasets and three bilingual combinations. We use the OpeNER English and Spanish datasets Agerri2013 and the MultiBooked Catalan and Basque datasets BIBREF1 . All datasets contain hotel reviews which are annotated for targeted sentiment analysis. The labels include Strong Negative ( $--$ ), Negative ( $-$ ), Positive ( $+$ ), and Strong Positive ( $++$ ). We map the aspect-level annotations to sentence level by taking the most common label and remove instances of mixed polarity. We also create a binary setup by combining the strong and weak classes. This gives us a total of six experiments. The details of the sentence-level datasets are summarized in Table 1 . For each of the experiments, we take 70 percent of the data for training, 20 percent for testing and the remaining 10 percent are used as development data for tuning meta-parameters. We use the following corpora to set up the experiments in which we train on a source language corpus $C_{S}$ and test on a target language corpus $C_{T}$ . Statistics for all of the corpora are shown in Table 3 . We include a binary classification setup, where neutral has been removed and strong positive and strong negative have been mapped to positive and negative, as well as a multiclass setup, where the original labels are used. OpeNER Corpora: The OpeNER corpora Agerri2013 are composed of hotel reviews, annotated for aspect-based sentiment. Each aspect is annotated with a sentiment label (Strong Positive, Positive, Negative, Strong Negative). We perform experiments with the English and Spanish versions. MultiBooked Corpora: The MultiBooked corpora Barnes2018a are also hotel reviews annotated in the same way as the OpeNER corpora, but in Basque and Catalan. 
These corpora allow us to observe how well each approach performs on low-resource languages. SemEval 2016 Task 5: We take the English and Spanish restaurant review corpora made available by the organizers of the SemEval event Pontiki2016. These corpora are annotated for three levels of sentiment (positive, neutral, negative). USAGE Corpora: The USAGE corpora Klinger2014a are Amazon reviews taken from a number of different items, and are available in English and German. Each aspect is annotated for three levels of sentiment (positive, neutral, negative). As the corpus has two sets of annotations available, we take the annotations from annotator 1 as the gold standard. For Blse, VecMap, Muse, and MT, we require monolingual vector spaces for each of our languages. For English, we use the publicly available GoogleNews vectors. For Spanish, Catalan, and Basque, we train skip-gram embeddings using the Word2Vec toolkit with 300 dimensions, subsampling of $10^{-4}$ , window of 5, negative sampling of 15 based on a 2016 Wikipedia corpus (sentence-split, tokenized with IXA pipes Agerri2014 and lowercased). The statistics of the Wikipedia corpora are given in Table 2 . For Blse, VecMap, Muse, and Barista, we also require a bilingual lexicon. We use the sentiment lexicon from HuandLiu2004 (to which we refer in the following as Hu and Liu) and its translation into each target language. We translate the lexicon using Google Translate and exclude multi-word expressions. This leaves a dictionary of 5700 translations in Spanish, 5271 in Catalan, and 4577 in Basque. We set aside ten percent of the translation pairs as a development set in order to check that the distances between translation pairs not seen during training are also minimized during training. Setting for Experiment 1: Sentence-level Classification We compare Blse (Sections UID23 – UID30 ) to VecMap, Muse, and Barista (Section "Previous Work" ) as baselines, which have similar data requirements and to machine translation (MT) and monolingual (Mono) upper bounds which request more resources. For all models (Mono, MT, VecMap, Muse, Barista), we take the average of the word embeddings in the source-language training examples and train a linear SVM. We report this instead of using the same feed-forward network as in Blse as it is the stronger upper bound. We choose the parameter $c$ on the target language development set and evaluate on the target language test set. Upper Bound Mono. We set an empirical upper bound by training and testing a linear SVM on the target language data. Specifically, we train the model on the averaged embeddings from target language training data, tuning the $c$ parameter on the development data. We test on the target language test data. Upper Bound MT. To test the effectiveness of machine translation, we translate all of the sentiment corpora from the target language to English using the Google Translate API. Note that this approach is not considered a baseline, as we assume not to have access to high-quality machine translation for low-resource languages of interest. Baseline Unsup We compare with the unsupervised statistical machine translation approach proposed by artetxe2018emnlp. This approach uses a self-supervised method to create bilingual phrase embeddings which then populates a phrase table. Monolingual n-gram language models and an unsupervised variant of MERT are used to create a MT model which is improved through iterative backtranslation. 
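The classification recipe shared by the Mono, MT, and projection baselines described above — averaging the word embeddings of each training sentence and fitting a linear SVM whose $c$ parameter is tuned on development data — can be sketched with scikit-learn as follows. The embedding lookup and data variables are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def average_embedding(tokens, emb, dim=300):
    """Bag-of-embeddings sentence representation: mean of the known word vectors."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def train_svm_baseline(train_sents, train_y, dev_sents, dev_y, emb):
    X_train = np.vstack([average_embedding(s, emb) for s in train_sents])
    X_dev = np.vstack([average_embedding(s, emb) for s in dev_sents])
    best_clf, best_f1 = None, -1.0
    for c in [0.001, 0.01, 0.1, 1, 10, 100]:   # grid for the c parameter
        clf = LinearSVC(C=c).fit(X_train, train_y)
        score = f1_score(dev_y, clf.predict(X_dev), average="macro")
        if score > best_f1:
            best_clf, best_f1 = clf, score
    return best_clf
```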
We use the Wikipedia corpora from Section UID42 to create the unsupervised SMT system between English and the target languages and run the training procedure with default parameters. Finally, we translate all test examples in the target languages to English. Baseline VecMap. We compare with the approach proposed by Artetxe2016 which has shown promise on other tasks, e. g., word similarity. In order to learn the projection matrix $W$, we need translation pairs. We use the same word-to-word bilingual lexicon mentioned in Section UID23. We then map the source vector space $S$ to the bilingual space $\hat{S} = SW$ and use these embeddings. Baseline Muse. This baseline is similar to VecMap but incorporates an adversarial objective as well as a localized scaling objective, which further improve the orthogonal refinement so that the two language spaces are even more similar. Baseline Barista. The approach proposed by Gouws2015taskspecific is another appropriate baseline, as it fulfills the same data requirements as the projection methods. The bilingual lexicon used to create the pseudo-bilingual corpus is the same word-to-word bilingual lexicon mentioned in Section UID23. We follow the authors' setup to create the pseudo-bilingual corpus. We create bilingual embeddings by training skip-gram embeddings using the Word2Vec toolkit on the pseudo-bilingual corpus using the same parameters from Section UID42. Our method: BLSE. Our model, Blse, is implemented in PyTorch and the word embeddings are initialized with the pretrained word embeddings $S$ and $T$ mentioned in Section UID42. We use the word-to-word bilingual lexicon from Section UID46, tune the hyperparameters $\alpha$, training epochs, and batch size on the target development set and use the best hyperparameters achieved on the development set for testing. ADAM Kingma2014a is used in order to minimize the average loss of the training batches. Ensembles. In order to evaluate to what extent each projection model adds complementary information to the machine translation approach, we create an ensemble of MT and each projection method (Blse, VecMap, Muse, Barista). A random forest classifier is trained on the predictions from MT and each of these approaches. Setting for Experiment 2: Targeted Classification For the targeted classification experiment, we compare the same models mentioned above, but adapted to the setting using the Split method from Section "Targeted Model". A simple majority baseline sets the lower bound, while the MT-based model serves as an upper bound. We assume our models to perform between these two, as we do not have access to the millions of parallel sentences required to perform high-quality MT and particularly aim at proposing a method which is less resource-hungry. We hypothesize that cross-lingual approaches are particularly error-prone when evaluative phrases and words are wrongly predicted. In such settings, it might be beneficial for a model to put emphasis on the target word itself and learn a prior distribution of sentiment for each target independent of the context. For example, if you assume that all mentions of Steven Segal are negative in movie reviews, it is possible to achieve good results Bird2009. On the other hand, it may be that there are not enough examples of target-context pairs, and that it is better to ignore the target and concentrate only on the contexts. To analyze this, we compare our model to two simplified versions.
In addition, this approach enables us to gain insight in the source of relevant information. The first is Target-only, which means that we use the model in the same way as before but ignore the context completely. This serves as a tool to understand how much model performance originates from the target itself. In the same spirit, we use a Context-only model, which ignores the target by constraining the parameters of all target phrase embeddings to be the same. This approach might be beneficial over our initial model if the prior distribution between targets was similar and the context actually carries the relevant information. As the baseline for each projection method, we assume all targets in each sentence respectively to be of the same polarity (Sent). This is generally an erroneous assumption, but can give good results if all of the targets in a sentence have the same polarity. In addition, this baseline provides us with the information about whether the models are able to handle information from different positions in the text. Experiment 1: Sentence-level Classification In Table 4 , we report the results of all four methods. Our method outperforms the other projection methods (the baselines VecMap, Muse, and Barista) on four of the six experiments substantially. It performs only slightly worse than the more resource-costly upper bounds (MT and Mono). This is especially noticeable for the binary classification task, where Blse performs nearly as well as machine translation and significantly better than the other methods. Unsup also performs similarly to Blse on the binary tasks, while giving stronger performance on the 4-class setup. We perform approximate randomization tests Yeh2000 with 10,000 runs and highlight the results that are statistically significant (*p $<$ 0.01) in Table 4 . In more detail, we see that MT generally performs better than the projection methods (79–69 $\text{F}_1$ on binary, 52–44 on 4-class). Blse (75–69 on binary, 41–30 on 4-class) has the best performance of the projection methods and is comparable with MT on the binary setup, with no significant difference on binary Basque. VecMap (67–46 on binary, 35–21 on 4-class) and Barista (61–55 on binary, 40–34 on 4-class) are significantly worse than Blse on all experiments except Catalan and Basque 4-class. Muse (67–62 on binary, 45–34 on 4-class) performs better than VecMap and Barista. On the binary experiment, VecMap outperforms Barista on Spanish (67.1 vs. 61.2) and Catalan (60.7 vs. 60.1) but suffers more than the other methods on the four-class experiments, with a maximum $\text{F}_1$ of 34.9. Barista is relatively stable across languages. Unsup performs well across experiments (76–65 on binary, 49–39 on 4-class), even performing better than MT on both Catalan tasks and Spanish 4-class. The Ensemble of MT and Blse performs the best, which shows that Blse adds complementary information to MT. Finally, we note that all systems perform worse on Basque. This is presumably due to the increased morphological complexity of Basque, as well as its lack of similarity to the source language English (Section UID102 ). We analyze three aspects of our model in further detail: 1) where most mistakes originate, 2) the effect of the bilingual lexicon, and 3) the effect and necessity of the target-language projection matrix $M^{\prime }$ . 
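Before turning to those three analyses, the approximate randomization test used for the significance markers above can be sketched generically; this is a standard formulation of the procedure, not the exact script behind the reported p-values.

```python
import random
from sklearn.metrics import f1_score

def approximate_randomization(gold, pred_a, pred_b, trials=10000, seed=1):
    """Approximate randomization test on the macro-F1 difference of two systems."""
    rng = random.Random(seed)
    observed = abs(f1_score(gold, pred_a, average="macro")
                   - f1_score(gold, pred_b, average="macro"))
    count = 0
    for _ in range(trials):
        swapped_a, swapped_b = [], []
        for a, b in zip(pred_a, pred_b):
            if rng.random() < 0.5:          # swap the two systems' predictions
                a, b = b, a
            swapped_a.append(a)
            swapped_b.append(b)
        diff = abs(f1_score(gold, swapped_a, average="macro")
                   - f1_score(gold, swapped_b, average="macro"))
        if diff >= observed:
            count += 1
    return (count + 1) / (trials + 1)        # p-value with add-one smoothing
```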
In order to analyze where each model struggles, we categorize the mistakes and annotate all of the test phrases with one of the following error classes: vocabulary (voc), adverbial modifiers (mod), negation (neg), external knowledge (know) or other. Table 5 shows the results. Vocabulary: The most common way to express sentiment in hotel reviews is through the use of polar adjectives (as in “the room was great”) or the mention of certain nouns that are desirable (“it had a pool”). Although this phenomenon has the largest total number of mistakes (an average of 72 per model on binary and 172 on 4-class), it is mainly due to its prevalence. MT performed the best on the test examples which according to the annotation require a correct understanding of the vocabulary (81 $\text{F}_1$ on binary /54 $\text{F}_1$ on 4-class), with Blse (79/48) slightly worse. Muse (76/23), VecMap (70/35), and Barista (67/41) perform worse. This suggests that Blse is better than Muse, VecMap and Barista at transferring sentiment of the most important sentiment bearing words. Negation: Negation is a well-studied phenomenon in sentiment analysis Pang2002,Wiegand2010,Zhu2014,Reitan2015 . Therefore, we are interested in how these four models perform on phrases that include the negation of a key element, for example “In general, this hotel isn't bad". We would like our models to recognize that the combination of two negative elements “isn't" and “bad" lead to a Positive label. Given the simple classification strategy, all models perform relatively well on phrases with negation (all reach nearly 60 $\text{F}_1$ in the binary setting). However, while Blse performs the best on negation in the binary setting (82.9 $\text{F}_1$ ), it has more problems with negation in the 4-class setting (36.9 $\text{F}_1$ ). Adverbial Modifiers: Phrases that are modified by an adverb, e. g., the food was incredibly good, are important for the four-class setup, as they often differentiate between the base and Strong labels. In the binary case, all models reach more than 55 $\text{F}_1$ . In the 4-class setup, Blse only achieves 27.2 $\text{F}_1$ compared to 46.6 or 31.3 of MT and Barista, respectively. Therefore, presumably, our model does currently not capture the semantics of the target adverbs well. This is likely due to the fact that it assigns too much sentiment to functional words (see Figure 6 ). Muse performs poorly on modified examples (20.3 $\text{F}_1$ ). External Knowledge Required: These errors are difficult for any of the models to get correct. Many of these include numbers which imply positive or negative sentiment (350 meters from the beach is Positive while 3 kilometers from the beach is Negative). Blse performs the best (63.5 $\text{F}_1$ ) while MT performs comparably well (62.5). Barista performs the worst (43.6). Binary vs. 4-class: All of the models suffer when moving from the binary to 4-class setting; an average of 26.8 in macro $\text{F}_1$ for MT, 31.4 for VecMap, 22.2 for Barista, 34.1 for Muse, and 36.6 for Blse. The vector projection methods (VecMap, Muse, and Blse) suffer the most, suggesting that they are currently more apt for the binary setting. We analyze how the number of translation pairs affects our model. We train on the 4-class Spanish setup using the best hyper-parameters from the previous experiment. 
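A sketch of this ablation harness is given below: the bilingual lexicon is truncated to increasing sizes (mirroring the pair counts reported in the following paragraph) and the model is retrained with otherwise fixed hyperparameters. The train_blse and evaluate_macro_f1 callables are hypothetical placeholders.

```python
def lexicon_size_ablation(lexicon, train_data, dev_data, train_blse, evaluate_macro_f1,
                          sizes=(0, 100, 300, 600, 1000, 3000, 6000, 10000, 20000)):
    """Retrain with the first n translation pairs and record development macro F1."""
    results = {}
    for n in sizes:
        model = train_blse(train_data, lexicon[:n])  # hyperparameters held constant
        results[n] = evaluate_macro_f1(model, dev_data)
    return results
```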
Research into projection techniques for bilingual word embeddings Mikolov2013translation,Lazaridou2015,Artetxe2016 often uses a lexicon of the most frequent 8–10 thousand words in English and their translations as training data. We test this approach by taking the 10,000 word-to-word translations from the Apertium English-to-Spanish dictionary. We also use the Google Translate API to translate the NRC hashtag sentiment lexicon Mohammad2013 and keep the 22,984 word-to-word translations. We perform the same experiment as above and vary the amount of training data from 0, 100, 300, 600, 1000, 3000, 6000, 10,000 up to 20,000 training pairs. Finally, we compile a small hand translated dictionary of 200 pairs, which we then expand using target language morphological information, finally giving us 657 translation pairs. The macro $\text{F}_1$ score for the Hu and Liu dictionary climbs constantly with the increasing translation pairs. Both the Apertium and NRC dictionaries perform worse than the translated lexicon by Hu and Liu, while the expanded hand translated dictionary is competitive, as shown in Figure 3 . While for some tasks, e. g., bilingual lexicon induction, using the most frequent words as translation pairs is an effective approach, for sentiment analysis, this does not seem to help. Using a translated sentiment lexicon, even if it is small, gives better results. The main motivation for using two projection matrices $M$ and $M^{\prime }$ is to allow the original embeddings to remain stable, while the projection matrices have the flexibility to align translations and separate these into distinct sentiment subspaces. To justify this design decision empirically, we perform an experiment to evaluate the actual need for the target language projection matrix $M^{\prime }$ : We create a simplified version of our model without $M^{\prime }$ , using $M$ to project from the source to target and then $P$ to classify sentiment. The results of this model are shown in Figure 4 . The modified model does learn to predict in the source language, but not in the target language. This confirms that $M^{\prime }$ is necessary to transfer sentiment in our model. Additionally, we provide an analysis of a similar model to ours, but which uses $M = \mathbb {R}^{d, o}$ and $M^{\prime } = \mathbb {R}^{d^{\prime }, o}$ , where $d$ ( $d^{\prime }$ ) is the dimensionality of the original embeddings and $o$ is the label size, to directly model crosslingual sentiment, such that the final objective function is $$J =\hspace{-14.22636pt}\sum _{(x,y) \in C_{\textrm {source}}}\hspace{2.84526pt}\sum _{(s,t) \in L}\hspace{0.0pt}\alpha \cdot H(x, y) + (1 - \alpha ) \cdot || M \cdot s - M^{\prime } \cdot t ||$$ (Eq. 66) thereby simplifying the model and removing the $P$ parameter. Table 6 shows that Blse outperforms this simplified model on all tasks. In order to understand how well our model transfers sentiment information to the target language, we perform two qualitative analyses. First, we collect two sets of 100 positive sentiment words and one set of 100 negative sentiment words. An effective cross-lingual sentiment classifier using embeddings should learn that two positive words should be closer in the shared bilingual space than a positive word and a negative word. We test if Blse is able to do this by training our model and after every epoch observing the mean cosine similarity between the sentiment synonyms and sentiment antonyms after projecting to the joint space. 
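A sketch of that measurement: after each epoch the word sets are projected into the joint space, and the mean pairwise cosine similarity is computed between the two positive sets (synonyms) and between a positive set and the negative set (antonyms). NumPy and the variable names are illustrative choices.

```python
import numpy as np

def mean_pairwise_cosine(A, B):
    """Average cosine similarity between all row pairs of matrices A and B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float((A @ B.T).mean())

def sentiment_similarity(pos_a, pos_b, neg, embeddings, projection):
    """Project each word set with the relevant projection matrix, then compare."""
    proj = lambda words: np.vstack([embeddings[w] for w in words]) @ projection
    synonyms = mean_pairwise_cosine(proj(pos_a), proj(pos_b))
    antonyms = mean_pairwise_cosine(proj(pos_a), proj(neg))
    return synonyms, antonyms
```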
We compare Blse with VecMap and Barista by replacing the Linear SVM classifiers with the same multi-layer classifier used in Blse and observing the distances in the hidden layer. Figure 5 shows this similarity in both source and target language, along with the mean cosine similarity between a held-out set of translation pairs and the macro $\text{F}_1$ scores on the development set for both source and target languages for Blse, Barista, and VecMap. From this plot, it is clear that Blse is able to learn that sentiment synonyms should be close to one another in vector space and antonyms should have a negative cosine similarity. While the other models also learn this to some degree, jointly optimizing both sentiment and projection gives better results. Secondly, we would like to know how well the projected vectors compare to the original space. Our hypothesis is that some relatedness and similarity information is lost during projection. Therefore, we visualize six categories of words in t-SNE, which projects high dimensional representations to lower dimensional spaces while preserving the relationships as best as possible Vandermaaten2008: positive sentiment words, negative sentiment words, functional words, verbs, animals, and transport. The t-SNE plots in Figure 6 show that the positive and negative sentiment words are rather clearly separated after projection in Blse. This indicates that we are able to incorporate sentiment information into our target language without any labeled data in the target language. However, the downside of this is that functional words and transportation words are highly correlated with positive sentiment. Finally, in order to analyze the sensitivity of the alpha parameter, we train Blse models for 30 epochs each with $\alpha $ between 0 and 1. Figure 7 shows the average cosine similarity for the translation pairs, as well as macro $\text{F}_1$ for both source and target language development data. Values near 0 lead to poor translation and consecuently poor target language transfer. There is a rather large “sweet spot” where all measures perform best and finally, the translation is optimized to the detriment of sentiment prediction in both source and target languages with values near 1. The experiments in this section have proven that it is possible to perform cross-lingual sentiment analysis without machine translation, and that jointly learning to project and predict sentiment is advantageous. This supports the growing trend of jointly training for multiple objectives Tang2014,Klinger2015,Ferreira2016. This approach has also been exploited within the framework of multi-task learning, where a model learns to perform multiple similar tasks in order to improve on a final task Collobert2011a. The main difference between the joint method proposed here and multi-task learning is that vector space projection and sentiment classification are not similar enough tasks to help each other. In fact, these two objectives compete against one another, as a perfect projection would not contain enough information for sentiment classification, and vice versa. Experiment 2: Targeted Classification Table 7 shows the macro $\text{F}_1$ scores for all cross-lingual approaches (Blse, VecMap, Muse, Barista, MT, Unsup) and all targeted approaches (Sent, Split, Context-only, and Target-only). The final column is the average over all corpora. The final row in each setup shows the macro $\text{F}_1$ for a classifier that always chooses the majority class. 
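Both quantities reported in the table — macro $\text{F}_1$ and the majority-class baseline — can be computed as in the generic sketch below, which is not the exact scoring script used for the experiments.

```python
from collections import Counter
from sklearn.metrics import f1_score

def macro_f1(gold, predictions):
    return f1_score(gold, predictions, average="macro")

def majority_baseline_f1(train_labels, test_labels):
    # Always predict the most frequent label observed in the training data.
    majority = Counter(train_labels).most_common(1)[0][0]
    return macro_f1(test_labels, [majority] * len(test_labels))
```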
Blse outperforms other projection methods on the binary setup, 63.0 macro averaged $\text{F}_1$ across corpora versus 59.0, 57.9, and 51.4 for VecMap, Muse, and Barista, respectively. On the multiclass setup, however, Muse (32.2 $\text{F}_1$ ) is the best, followed by VecMap (31.0), Barista (28.1) and Blse (23.7). Unsup performs well across all experiments, achieving the best results on OpeNER ES (73.2 on binary and 42.7 on multiclass) and SemEval binary (77.1). VecMap is never the best nor the worst approach. In general, Barista performs poorly on the binary setup, but slightly better on the multiclass, although the overall performance is still weak. These results are similar to those observed in Experiment 1 for sentence classification. The Split approach to ABSA improves over the Sent baseline on 33 of 50 experiments, especially on binary (21/25), while on multiclass it is less helpful (13/25). Both Sent and Split normally outperform Context-only or Target-only approaches. This confirms the intuition that it is important to take both context and target information for classification. Additionally, the Context-only approach always performs better than Target-only, which indicates that context is more important than the prior probability of an target being positive or negative. Unlike the projection methods, MT using only the Sent representation performs well on the OpeNER and MultiBooked datasets, while suffering more on the SemEval and USAGE datasets. This is explained by the percentage of sentences that contain contrasting polarities in each dataset: between 8 and 12% for the OpeNER and Multibooked datasets, compared to 29% for SemEval or 50% for USAGE. In sentences with multiple contrasting polarities, the Sent baseline performs poorly. Finally, the general level of performance of projection-based targeted cross-lingual sentiment classification systems shows that they still lag 10+ percentage points behind MT on binary (compare MT (72.9 $\text{F}_1$ ) with Blse (63.0)), and 6+ percentage points on multiclass (MT (38.8) versus Muse (32.2)). The gap between MT and projection-based approaches is therefore larger on targeted sentiment analysis than at sentence-level. We perform a manual analysis of the targets misclassified by all systems on the OpeNER Spanish binary corpus (see Table 8 ), and found that the average length of misclassified targets is slightly higher than that of correctly classified targets, except for with VecMap. This indicates that averaging may have a detrimental effect as the size of the targets increases. With the MT upperbounds, there is a non-negligible amount of noise introduced by targets which have been incorrectly translated (0.05% OpeNER ES, 6% MultiBooked EU, 2% CA, 2.5% SemEval, 1% USAGE). We hypothesize that this is why MT with Context-only performs better than MT with Split. This motivates further research with projection-based methods, as they do not suffer from translation errors. The confusion matrices of the models on the SemEval task, shown in Figure 8 , show that on the multilabel task, models are not able to learn the neutral class. This derives from the large class imbalance found in the data (see Table 3 ). Similarly, models do not learn the Strong Negative class on the OpeNER and MultiBooked datasets. Motivation The performance of machine learning models on different target languages depends on the amount of data available, the quality of the data, and characteristics of the target language, e. g., morphological complexity. 
In the following, we analyze these aspects. There has been previous work that has observed target-language specific differences in multilingual dependency parsing Zeljko2016, machine translation Johnson2017, and language modeling Cotterell2018,Gerz2018. We are not aware of any work in cross-lingual sentiment analysis that explores the relationship between target language and performance in such depth and aim at improving this situation in the following. Additionally, the effect of domain differences when performing cross-lingual tasks has not been studied in depth. Hangya2018 propose domain adaptation methods for cross-lingual sentiment classification and bilingual dictionary induction. They show that creating domain-specific cross-lingual embeddings improves the classification for English-Spanish. However, the source-language training data used to train the sentiment classifier is taken from the same domain as the target-language test data. Therefore, it is not clear what the effect of using source-language training data from different domains would be. We analyzed the model presented in Section "Sentence-level Model" in a domain adaptation setup, including the impact of domain differences Barnes2018c. The main result was that our model performs particularly well on more distant domains, while other approaches Chen2012,Ziser2017 performed better when the source and target domains were not too dissimilar. In the following, we transfer this analysis to the target-based projection model in a real-world case study which mimics a user searching for the sentiment on touristic attractions. In order to analyze how well these methods generalize to new languages and domains, we deploy the targeted Blse, Muse, VecMap and MT models on tweets in ten Western European languages with training data from three different domains. Additionally, we include experiments with the Unsup models for a subset of the languages. English is the source language in all experiments, and we test on each of the ten target languages and attempt to answer the following research questions: How much does the amount of monolingual data available to create the original embeddings effect the final results? How do features of the target language, i. e. similarity to source language or morphological complexity, affect the performance? How do domain mismatches between source-language training and target-language test data affect the performance? Section "Discussion" addresses our findings regarding these questions and demonstrates that 1) the amount of monolingual data does not correlate with classification results, 2) language similarity between the source and target languages based on word and character n-gram distributions predicts the performance of Blse on new datasets, and 3) domain mismatch has more of an effect on the multiclass setup than binary. Experimental Setup We collect tweets directed at a number of tourist attractions in European cities using the Twitter API in 10 European languages, including several under-resourced languages (English, Basque, Catalan, Galician, French, Italian, Dutch, German, Danish, Swedish, and Norwegian). We detail the data collection and annotation procedures in Section UID85 . For classification, we compare MT the best performing projection-based methods (Blse, Muse, VecMap) using the Split method, detailed further in Section UID94 . As we need monolingual embeddings for all projection-based approaches, we create skipgram embeddings from Wikipedia dumps, detailed in Section UID91 . 
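One way to train the skip-gram embeddings described here is with gensim, named as a stand-in for the Word2Vec toolkit; the hyperparameters mirror those reported for the sentence-level experiments, while min_count and the preprocessing pipeline are assumptions.

```python
from gensim.models import Word2Vec  # gensim >= 4.0

def train_skipgram(tokenized_sentences):
    """tokenized_sentences: iterable of token lists from a preprocessed Wikipedia dump."""
    model = Word2Vec(
        sentences=tokenized_sentences,
        vector_size=300,   # 300-dimensional vectors
        sg=1,              # skip-gram
        window=5,
        sample=1e-4,       # subsampling of frequent words
        negative=15,       # negative sampling
        min_count=5,       # assumed frequency cutoff
        workers=4,
    )
    return model.wv
```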
As an experimental setting to measure the effectiveness of targeted cross-lingual sentiment models on a large number of languages, we collect and annotate small datasets from Twitter for each of the target languages, as well as a larger dataset to train the models in English. While it would be possible to only concentrate our efforts on languages with existing datasets in order to enable evaluation, this could give a distorted view of how well these models generalize. In order to reduce the possible ambiguity of the tourist attractions, we do not include those that have two or more obvious senses, e. g., Barcelona could refer either to the city or the football team. In order to obtain a varied sample of tweets with subjective opinions, we download tweets that contain mentions of these tourist attractions as well as one of several emoticons or keywords. This distant supervision technique has been used to create sentiment lexicons Mohammad2016, semi-supervised training data Felbo2017, and features for a classifier Turney2003. We then remove any tweets that are less than 7 words long or which contain more than 3 hashtags or mentions. This increases the probability that a tweet text contains sufficient information for our use case setting. We manually annotate all tweets for its polarity toward the target to insure the quality of the data. Note that we only annotate the sentiment towards the predefined list of targets, which leads to a single annotated target per tweet. Any tweets that have unclear polarity towards the target are assigned a neutral label. This produces the three class setup that is commonly used in the SemEval tasks Nakov2013,Nakov2016. Annotators were master's and doctoral students between 27 and 35 years old. All had either native or C1 level fluency in the languages of interest. Finally, for a subset of tweets in English, Catalan, and Basque two annotators classify each tweet. Table 11 shows three example tweets from English. Table 10 depicts the number of annotated targets for all languages, as well as inter-annotator agreement using Cohen's $\kappa $ . The neutral class is the largest in all languages, followed by positive, and negative. These distributions are similar to those found in other Twitter crawled datasets Nakov2013,Nakov2016. We calculate pairwise agreement on a subset of languages using Cohen's $\kappa $ . The scores reflect a good level of agreement (0.62, 0.60, and 0.61 for English, Basque, and Catalan, respectively). We collect Wikipedia dumps for ten languages; namely, Basque, Catalan, Galician, French, Italian, Dutch, German, Danish, Swedish, and Norwegian. We then preprocess them using the Wikiextractor script, and sentence and word tokenize them with either IXA pipes Agerri2014 (Basque, Galician, Italian, Dutch, and French), Freeling Padro2010 (Catalan), or NLTK Loper2002 (Norwegian, Swedish, Danish). For each language we create Skip-gram embeddings with the word2vec toolkit following the pipeline and parameters described in Section UID42 . This process gives us 300 dimensional vectors trained on similar data for all languages. We assume that any large differences in the embedding spaces derive from the size of the data and the characteristics of the language itself. Following the same criteria laid out in Section UID46 , we create projection dictionaries by translating the Hu and Liu dictionary HuandLiu2004 to each of the target languages and keeping only translations that are single word to single word. 
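The dictionary construction step just described can be sketched as below; translate stands in for whichever translation service is used and is purely hypothetical.

```python
def build_projection_dictionary(lexicon_words, translate):
    """Keep only single-word-to-single-word translation pairs."""
    pairs = []
    for word in lexicon_words:
        translation = translate(word)  # hypothetical translation call
        if len(word.split()) == 1 and len(translation.split()) == 1:
            pairs.append((word, translation))
    return pairs
```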
The statistics of all Wikipedia corpora, embeddings, and projection dictionaries are shown in Table 12 . Since we predetermine the sentiment target for each tweet, we can perform targeted experiments without further annotation. We use the Split models described in Section "Targeted Model" . Our model is the targeted Blse models described in Section "Targeted Model" . Additionally, we compare to the targeted Muse, VecMap, and MT models, as well as an Ensemble classifier that uses the predictions from Blse and MT before taking the largest predicted class for classification (see Section "Setting for Experiment 1: Sentence-level Classification" for details). Finally, we set a majority baseline by assigning the most common label (neutral) to all predictions. All models are trained for 300 epochs with a learning rate of 0.001 and $\alpha $ of 0.3. We train the five models on the English data compiled during this study, as well as on the USAGE, and SemEval English data (the details can be found in Table 3 ) and test the models on the target-language test set. Results Table 13 shows the macro $\text{F}_1$ scores for all cross-lingual targeted sentiment approaches (Blse, Muse, VecMap, MT) trained on English data and tested on the target-language using the Split method proposed in "Targeted Model" . The final column is the average over all languages. Given the results from the earlier experiments, we hypothesize that MT should outperform Muse, VecMap and Blse for most of the languages. On the binary setup, Blse outperforms all other cross-lingual methods including MT and Unsup, with 56.0 macro averaged $\text{F}_1$ across languages versus 48.7, 49.4, and 48.9 for Muse, VecMap, and MT respectively (54.1 across Basque and Catalan versus 46.0 for Unsup). Blse performs particularly well on Catalan (54.5), Italian (63.4), Swedish (65.3), and Danish (68.3). VecMap performs poorly on Galician (33.3), Italian (38.2), and Danish (43.4), but outperforms all other methods on Basque (56.4), Dutch (55.2) and Norwegian (59.0). MT performs worse than Blse and VecMap, although it does perform best for Galician (56.5). Unlike experiments in Section "Sentence-level Model" , the ensemble approach does not perform better than the individual classifiers and Muse leads to the classifier with the lowest performance overall. Unsup performs better than MT on both Basque and Catalan. On the multiclass setup, however, MT (36.6 $\text{F}_1$ ) is the best, followed by VecMap (34.1), Blse (32.6), and Muse (26.1). Compared to the experiments on hotel reviews, the average differences between models is small (2.5 percentage points between MT and VecMap, and 1.5 between VecMap and Blse). Unsup performs better than MT on Basque (40.1), but worse on Catalan (28.5). Again, all methods outperform the majority baseline. On both the binary and multiclass setups, the best overall results are obtained by testing and training on data from the same domain (56.0 $\text{F}_1$ for Blse and 36.6 $\text{F}_1$ for MT). Training MT, Muse, and VecMap on the SemEval data performs better than training on USAGE, however. An initial error analysis shows that all models suffer greatly on the negative class. This seems to suggest that negative polarity towards a target is more difficult to determine within these frameworks. A significant amount of the tweets that have negative polarity towards a target also express positive or neutral sentiment towards other targets. 
The averaging approach to create the context vectors does not currently allow any of the models to exclude this information, leading to poor performance on these instances. Finally, compared to the experiments performed on hotel and product reviews in Section "Experiments" , the noisy data from Twitter is more difficult to classify. Despite the rather strong majority baseline (an average of 40.5 Macro $\text{F}_1$ on binary), no model achieves more than an average of 56 Macro $\text{F}_1$ on the binary task. A marked difference is that Blse and VecMap outperform MT on the binary setup. Unlike the previous experiment, Muse performs the worst on the multiclass setup. The other projection methods obtain multiclass results similar to the previous experiment (32.6–34.1 $\text{F}_1$ here compared to 23.7–31.0 $\text{F}_1$ previously). Discussion In this section, we present an error analysis. Specifically, Table 14 shows examples where Blse correctly predicts the polarity of a tweet that MT and Unsup incorrectly predict, and vice versa, as well as examples where all models are incorrect. In general, in examples where Blse outperforms MT and Unsup, the translation-based approaches often mistranslate important sentiment words, which leads to prediction errors. In the first Basque tweet, for example, “#txindoki igo gabe ere inguruaz goza daiteke... zuek joan tontorrera eta utzi arraroei gure kasa...”, Unsup incorrectly translates the most important sentiment word in the tweet “goza” (enjoy) to “overlook” and subsequently incorrectly predicts that the polarity towards txindoki is negative. Tweets that contain many out-of-vocabulary words or non-standard spelling (due to dialectal differences, informal writing, etc.), such as the third tweet in Table 14 , “kanpora jun barik ehko asko: anboto, txindoki”, are challenging for all models. In this example “jun” is a non-standard spelling of “joan” (go), “barik” is a Bizcayan Basque variant of “gabe” (without) , and “ehko” is an abbreviation of “Euskal Herriko” (Basque Country's). These lead to poor translations for MT and Unsup, but pose a similar out-of-vocabulary problem for Blse. In order to give a more qualitative view of the targeted model, Figure 9 shows t-sne projections of the bilingual vector space before and after training on the Basque binary task, following the same proceedure mentioned in Section UID68 . As in the sentence-level experiment, there is a separation of the positive and negative sentiment words, although it is less clear for targeted sentiment. This is not surprising, as a targeted model must learn not only the prior polarity of words, but how they interact with targets, leading to a more context-dependent representation of sentiment words. Finally, we further analyze the effects of three variables that are present in cross-lingual sentiment analysis: a) availability of monolingual unlabeled data, b) similarity of source and target languages, and c) domain shift between the source language training data and the target language test data. We pose the question of what the relationship is between the amount of available monolingual data to create the embedding spaces and the classification results of the models. If the original word embedding spaces are not of high quality, this could make it difficult for the projection-based models to create useful features. 
In order to test this, we perform ablation experiments by training target-language embeddings on varying amounts of data ( $1 \times 10^{4}$ to $5 \times 10^{9}$ tokens) and testing the models replacing the full target-language embeddings with these. We plot the performance of the models as a function of available monolingual data in Figure 10 . Figure 10 shows that nearly all models, with the exception of Norwegian, perform poorly with very limited monolingual training data ( $1\times 10^{4}$ ) and improve, although erratically, with more training data. Interestingly, the models require little data to achieve results comparable to using the all tokens to train the embeddings. A statistical analysis of the amount of unlabeled data available and the performance of Blse, Muse, VecMap (Pearson's $r$ = $-0.14$ , $-0.27$ , $0.08$ , respectively) reveals no statistically significant correlation between them. This seems to indicate that all models are not sensitive to the amount of monolingual training data available in the target language. One hypothesis to different results across languages is that the similarity of the source and target language has an effect on the final classification of the models. In order to analyze this, we need a measure that models pairwise language similarity. Given that the features we use for classification are derived from distributional representations, we model similarity as a function of 1) universal POS-tag n-grams which represent the contexts used during training, and 2) character n-grams, which represent differences in morphology. POS-tag n-grams have previously been used to classify genre Fang2010, improve statistical machine translation Lioma2005, and the combination of POS-tag and character n-grams have proven useful features for identifying the native language of second language writers in English Kulmizev2017. This indicates that these are useful features for characterizing a language. In this section we calculate the pairwise similarity between all languages and then check whether this correlates with performance. After POS-tagging the test sentences obtained from Twitter using the universal part of speech tags Petrov2012, we calculate the normalized frequency distribution $P_{l}$ for the POS-tag trigrams and $C_{l}$ for character trigrams for each language $l$ in $L = \lbrace \textrm {Danish, Swedish, Norwegian, Italian, Basque, Catalan, French, Dutch, Galician,}$ $\textrm {German, English}\rbrace $ . We then compute the pairwise cosine similarity between $\cos (A, B) = \frac{A \cdot B}{||A|| \: ||B||} $ where $A$ is the concatenation of $P_{l_{i}}$ and $C_{l_{i}}$ for language $l_{i}$ and $B$ is the concatenation of $P_{l_{j}}$ and $C_{l_{j}}$ for language $l_{j}$ . The pairwise similarities in Figure 11 confirm to expected similarities, and language families are clearly grouped (Romance, Germanic, Scandinavian, with Basque as an outlier that has no more than 0.47 similarity with any language). This confirms the use of our similarity metric for our purposes. We plot model performance as a function of language similarity in Figure 12 . To measure the correlation between language similarity and performance, we calculate Pearson's $r$ and find that for Blse there is a strong correlation between language similarity and performance, $r = 0.76$ and significance $p < 0.01$ . Muse, VecMap and MT do not show these correlations ( $r$ = 0.41, 0.24, 0.14, respectively). 
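The similarity metric and the correlation just described can be sketched as follows: normalized frequency distributions over POS-tag trigrams and character trigrams are concatenated per language, compared with cosine similarity, and then correlated with performance via Pearson's $r$. Sharing one trigram universe across languages is an implementation assumption made here so that the vectors are comparable.

```python
import numpy as np
from collections import Counter
from itertools import chain
from scipy.stats import pearsonr

def trigrams(seq):
    return [tuple(seq[i:i + 3]) for i in range(len(seq) - 2)]

def normalized_distribution(items, universe):
    counts = Counter(items)
    total = sum(counts.values()) or 1
    return np.array([counts[u] / total for u in universe])

def language_vector(pos_tagged_sents, characters, pos_universe, char_universe):
    """Concatenation of the POS-trigram distribution P_l and character-trigram distribution C_l."""
    pos_tri = list(chain.from_iterable(trigrams(s) for s in pos_tagged_sents))
    return np.concatenate([normalized_distribution(pos_tri, pos_universe),
                           normalized_distribution(trigrams(characters), char_universe)])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_performance_correlation(similarities_to_english, f1_scores):
    return pearsonr(similarities_to_english, f1_scores)  # (r, p-value)
```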
For MT this may be due to robust machine translation available in less similar languages according to our metric, e. g., German-English. For Muse and VecMap, however, it is less clear why it does not follow the same trend as Blse. In this section, we determine the effect of source-language domain on the cross-lingual sentiment classification task. Specifically, we use English language training data from three different domains (Twitter, restaurant reviews, and product reviews) to train the cross-lingual classifiers, and then test on the target-language Twitter data. In monolingual sentiment analysis, one would expect to see a drop when moving to more distant domains. In order to analyze the effect of domain similarity further, we test the similarity of the domains of the source-language training data using Jensen-Shannon Divergence, which is a smoothed, symmetric version of the Kullback-Leibler Divergence, $D_{KL}(A||B) = \sum _{i}^{N} a_{i} \log \frac{a_{i}}{b_{i}}$ . Kullback-Leibler Divergence measures the difference between the probability distributions $A$ and $B$ , but is undefined for any event $a_{i} \in A$ with zero probability, which is common in term distributions. Jensen-Shannon Divergence is then $ D_{JS}(A,B) = \frac{1}{2} \Big [ D_{KL}(A||B) + D_{KL}(B||A) \Big ]\,. $ Our similarity features are probability distributions over terms $t \in \mathbb {R}^{|V|}$ , where $t_{i}$ is the probability of the $i$ -th word in the vocabulary $V$ . For each domain, we create frequency distributions of the most frequent 10,000 unigrams that all domains have in common and measure the divergence with $D_{JS}$ . The results shown in Table 15 indicate that both the SemEval and USAGE datasets are relatively distinct from the Twitter data described in Section UID85 , while they are more similar to each other. Additionally, we plot the results of all models with respect to the training domain in Figure 13 . We calculate Pearson's $r$ on the correlation between domain and model performance, shown in Table 16 . On the binary setup, the results show a negligible correlation for Blse (0.32), with no significant correlation for Muse, VecMap or MT. This suggests that the models are relatively robust to domain noise, or rather that there is so much other noise found in the approaches that domain is less relevant. On the multiclass setup, however, there is a significant effect for all models. This indicates that the multiclass models presented here are less robust than the binary models. Both the SemEval and USAGE corpora differ equally from the Twitter data given the metric defined here. The fact that models trained on SemEval tend to perform better than those trained on USAGE, therefore, seems to be due to the differences in label distribution, rather than to differences in domain. These label distributions are radically different in the multiclass setup, as the English Twitter data has a 30/50/20 distribution over Positive, Neutral, and Negative labels (67/1/32 and 68/4/28 for USAGE and SemEval, respectively). Both undersampling and oversampling help, but the performance is still worse than training on in-domain data. The case study which we presented in this section showed results of deploying the models from Section "Projecting Sentiment Across Languages" to real world Twitter data, which we collect and annotate for targeted sentiment analysis. 
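For reference, the domain-similarity measure used above can be sketched as below. The formula follows the symmetrised form given in the text; the small additive smoothing is an assumption added here so that every Kullback-Leibler term stays defined.

```python
import numpy as np

def kl_divergence(a, b):
    return float(np.sum(a * np.log(a / b)))

def js_divergence(a, b, eps=1e-10):
    """Symmetrised KL over term distributions; eps keeps zero counts from breaking the log."""
    a = (a + eps) / (a + eps).sum()
    b = (b + eps) / (b + eps).sum()
    return 0.5 * (kl_divergence(a, b) + kl_divergence(b, a))

def term_distribution(token_counts, shared_vocab):
    """Probabilities over the 10,000 most frequent unigrams shared by all domains."""
    counts = np.array([token_counts.get(w, 0) for w in shared_vocab], dtype=float)
    return counts / counts.sum()
```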
The analysis of different phenomena revealed that for binary targeted sentiment analysis, Blse performs better than machine translation on noisy data from social media, although it is sensitive to differences between source and target languages. Finally, there is little correlation between performance on the cross-lingual sentiment task and the amount of unlabeled monolingual data used to create the original embeddings spaces which goes against our expectations. Unlike the experiments in Section "Sentence-level Model" , the ensemble classifier employed here was not able to improve the results. We assume that the small size of the datasets in this experiment does not enable the classifier to learn which features are useful in certain contexts. One common problem that appears when performing targeted sentiment analysis on noisy data from Twitter is that many of the targets of interest are ambiguous, which leads to false positives. Even with relatively unambiguous targets like “Big Ben”, there are a number of entities that can be referenced; Ben Rothlisberger (an American football player), an English language school in Barcelona, and many others. In order to deploy a full sentiment analysis system on Twitter data, it will be necessary to disambiguate these mentions before classifying the tweets, either as a preprocessing step or jointly. In sentiment analysis, it is not yet common to test a model on multiple languages, despite the fact that current state-of-the-art models are often theoretically language-agnostic. This section shows that good performance in one language does not guarantee that a model transfers well to other languages, even given similar resources. We hope that future work in sentiment analysis will make better use of the available test datasets. Conclusion With this article, we have presented a novel projection-based approach to targeted cross-lingual sentiment analysis. The central unit of the proposed method is Blse which enables the transfer of annotations from a source language to a non-annotated target language. The only input it relies on are word embeddings (which can be trained without manual labeling by self-annotation) and a comparably small translation dictionary which connects the semantics of the source and the target language. In the binary classification setting (automatic labeling of sentences or documents), Blse constitutes a novel state of the art on several language and domain pairs. For a more fine-grained classification to four sentiment labels, Barista and Muse perform slightly better. The predictions in all settings are complementary to the strong upper bound of employing machine translations: in an ensemble, even this resource-intense approach is inferior. The transfer from classification to target-level analysis revealed additional challenges. The performance is lower, particularly for the 4-class setting. Our analyses show that mapping of sentence predictions to the aspects mentioned in each sentence with a machine translation model is a very challenging empirical upper bound – the difference in performance compared to projection-based methods is greater here than for the sentence-classification setting. However, we showed that in resource-scarce environments, Blse constitutes the current state of the art for binary target-level sentiment analysis when incorporated in a deep learning architecture which is informed about the aspect. Muse performs better in the same architecture for the 4-class setting. 
Our analysis further showed that the neural network needs to be informed about both the aspect and the context – limiting the information to a selection of these sentence parts strongly underperforms the combined setting. That also demonstrates that the model does not rely on prior distributions of aspect mentions. The final experiment in the paper is a real-world deployment of the target-level sentiment analysis system in a multilingual setting with 10 languages, where the assumption is that the only supervision is available in English (which is not part of the target languages). We learned here that it is important to have access to in-domain data (even for cross-lingual projection), especially in the multiclass setting. Binary classification, however, which might often be sufficient for real-world applications, is more robust to domain changes. Further, machine translation is less sensitive to language dissimilarities, unlike projection-based methods. The amount of available unlabeled data to create embeddings plays a role in the final performance of the system, although only to a minor extent. The current performance of the projection-based techniques still lags behind state-of-the-art MT approaches on most tasks, indicating that there is still much work to be done. While general bilingual embedding techniques do not seem to incorporate enough sentiment information, they are able to retain the semantics of their word vectors to a large degree even after projection. We hypothesize that the ability to retain the original semantics of the monolingual spaces leads to Muse performing better than MT on multiclass targeted sentiment analysis. The joint approach introduced in this work suffers from the degradation of the original semantic space while optimizing the sentiment information. Moving from a similarity-based loss to a ranking loss, where the model must predict a ranked list of the most similar translations, could improve the model, but would require further resource development cross-lingually, as a simple bilingual dictionary would not provide enough information. One problem that arises when using bilingual embeddings instead of machine translation is that differences in word order are no longer handled BIBREF2. Machine translation models, on the other hand, always include a reordering element. Nonetheless, there is often a mismatch between the real source language word order and the translated word order. In this work, we avoided the problem by using a bag-of-embeddings representation, but Barnes2017 found that the bag-of-embeddings approach does not perform as well as approaches that take word order into account, e. g., Lstms or Cnns. We leave the incorporation of these classifiers into our framework for future work. Unsupervised machine translation Artetxe2018,Lample2018,artetxe2018emnlp shows great promise for sentence-level classification. Like MT, however, it performs worse on noisy data, such as tweets. Therefore, users who want to apply targeted cross-lingual approaches to noisy data should currently consider using embedding projection methods, such as Blse. Future work on adapting unsupervised machine translation to noisy text may provide another solution for low-resource NLP. The authors thank Patrik Lambert, Toni Badia, Amaia Oliden, Itziar Etxeberria, Jessie Kief, Iris Hübscher, and Arne Øhm for helping with the annotation of the resources used in this research.
This work has been partially supported by the DFG Collaborative Research Centre SFB 732 and an SGR-DTCL Predoctoral Scholarship.
Introduction Open information extraction (IE, sekine2006demand, Banko:2007:OIE) aims to extract open-domain assertions represented in the form of $n$-tuples (e.g., was born in; Barack Obama; Hawaii) from natural language sentences (e.g., Barack Obama was born in Hawaii). Open IE started from rule-based BIBREF0 and syntax-driven systems BIBREF1, BIBREF2, and recently has used neural networks for supervised learning BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. A key step in open IE is confidence modeling, which ranks a list of candidate extractions based on their estimated quality. This is important for downstream tasks, which rely on trade-offs between the precision and recall of extracted assertions. For instance, an open IE-powered medical question answering (QA) system may require its assertions in higher precision (and consequently lower recall) than QA systems for other domains. For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3, BIBREF5. However, we observe that this often yields sub-optimal ranking results, with incorrect extractions of one sentence having higher likelihood than correct extractions of another sentence. We hypothesize this is due to the issue of a disconnect between training and test-time objectives. Specifically, the system is trained solely to raise the likelihood of gold-standard extractions, and during training the model is not aware of its test-time behavior of ranking a set of system-generated assertions across sentences that potentially include incorrect extractions. To calibrate open IE confidences and make them more globally comparable across different sentences, we propose an iterative rank-aware learning approach, as outlined in fig:arch. Given extractions generated by the model as training samples, we use a binary classification loss to explicitly increase the confidences of correct extractions and decrease those of incorrect ones. Without adding additional model components, this training paradigm naturally leads to a better open IE model, whose extractions can be further included as training samples. We further propose an iterative learning procedure that gradually improves the model by incrementally adding extractions to the training data. Experiments on the OIE2016 dataset BIBREF8 indicate that our method significantly outperforms both neural and non-neural models. Neural Models for Open IE We briefly revisit the formulation of open IE and the neural network model used in our paper. Problem Formulation Given sentence $\mathbf {s}=(w_1, w_2, ..., w_n)$, the goal of open IE is to extract assertions in the form of tuples $\mathbf {r}=(\mathbf {p}, \mathbf {a}_1, \mathbf {a}_2, ..., \mathbf {a}_m)$, composed of a single predicate and $m$ arguments. Generally, these components in $\mathbf {r}$ need not be contiguous, but to simplify the problem we assume they are contiguous spans of words from $\mathbf {s}$ and there is no overlap between them. Methods to solve this problem have recently been formulated as sequence-to-sequence generation BIBREF4, BIBREF5, BIBREF6 or sequence labeling BIBREF3, BIBREF7. We adopt the second formulation because it is simple and can take advantage of the fact that assertions only consist of words from the sentence.
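As a toy illustration of this labeling view (using the BIO tag names introduced in the next paragraph), a sentence, a predicate of interest, and one extraction can be written as a per-token tag sequence; the example is hypothetical.

```python
# Extraction (was born in; Barack Obama; Hawaii) rendered as per-token tags.
sentence = ["Barack", "Obama", "was", "born", "in", "Hawaii"]
predicate_of_interest = "born"  # candidate predicates are verbs at test time
tags = ["B_a1", "I_a1", "B_p", "I_p", "I_p", "B_a2"]
assert len(tags) == len(sentence)
```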
Within this framework, an assertion $\mathbf {r}$ can be mapped to a unique BIO BIBREF3 label sequence $\mathbf {y}$ by assigning $O$ to the words not contained in $\mathbf {r}$ , $B_{p}$ / $I_{p}$ to the words in $\mathbf {p}$ , and $B_{a_i}$ / $I_{a_i}$ to the words in $\mathbf {a}_i$ respectively, depending on whether the word is at the beginning or inside of the span. The label prediction $\hat{\mathbf {y}}$ is made by the model given a sentence associated with a predicate of interest $(\mathbf {s}, v)$ . At test time, we first identify verbs in the sentence as candidate predicates. Each sentence/predicate pair is fed to the model and extractions are generated from the label sequence. Model Architecture and Decoding Our training method in sec:ours could potentially be used with any probabilistic open IE model, since we make no assumptions about the model and only the likelihood of the extraction is required for iterative rank-aware learning. As a concrete instantiation in our experiments, we use RnnOIE BIBREF3 , BIBREF9 , a stacked BiLSTM with highway connections BIBREF10 , BIBREF11 and recurrent dropout BIBREF12 . Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $ \mathbf {x}_t = [\mathbf {W}_{\text{emb}}(w_t), \mathbf {W}_{\text{mask}}(w_t = v)]. $ The probability of the label at each position is calculated independently using a softmax function: $ P(y_t|\mathbf {s}, v) \propto \text{exp}(\mathbf {W}_{\text{label}}\mathbf {h}_t + \mathbf {b}_{\text{label}}), $ where $\mathbf {h}_t$ is the hidden state of the last layer. At decoding time, we use the Viterbi algorithm to reject invalid label transitions BIBREF9 , such as $B_{a_2}$ followed by $I_{a_1}$ . We use average log probability of the label sequence BIBREF5 as its confidence: $$c(\mathbf {s}, v, \hat{\mathbf {y}}) = \frac{\sum _{t=1}^{|\mathbf {s}|}{\log {P(\hat{y_t}|\mathbf {s}, v)}}}{|\mathbf {s}|}.$$ (Eq. 7) The probability is trained with maximum likelihood estimation (MLE) of the gold extractions. This formulation lacks an explicit concept of cross-sentence comparison, and thus incorrect extractions of one sentence could have higher confidence than correct extractions of another sentence. Iterative Rank-Aware Learning In this section, we describe our proposed binary classification loss and iterative learning procedure. Binary Classification Loss To alleviate the problem of incomparable confidences across sentences, we propose a simple binary classification loss to calibrate confidences to be globally comparable. Given a model $\theta ^\prime $ trained with MLE, beam search is performed to generate assertions with the highest probabilities for each predicate. Assertions are annotated as either positive or negative with respect to the gold standard, and are used as training samples to minimize the hinge loss: $$\hspace{-2.84526pt}\hat{\theta } = \underset{\theta }{\operatornamewithlimits{arg\,min}}\hspace{-8.53581pt}\underset{\begin{array}{c}\mathbf {s} \in \mathcal {D}\\ v, \hat{\mathbf {y}} \in g_{\theta ^\prime }(\mathbf {s})\end{array}}{\operatorname{\mathbb {E}}}\hspace{-11.38109pt}\max {(0,1-t \cdot c_{\theta }(\mathbf {s}, v, \hat{\mathbf {y}}))},$$ (Eq. 9) where $\mathcal {D}$ is the training sentence collection, $g_{\theta ^\prime }$ represents the candidate generation process, and $t \in \lbrace 1,-1\rbrace $ is the binary annotation. 
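The confidence of Eq. 7, which is also the c term inside the hinge loss above, is just the average per-token log-probability of the decoded label sequence. A minimal sketch (the function name is ours, not from the paper's code):

import numpy as np

def sequence_confidence(label_probs, predicted_labels):
    """c(s, v, y_hat): mean log P(y_hat_t | s, v) over the sentence (Eq. 7).

    label_probs: (seq_len, num_labels) softmax outputs of the tagger
    predicted_labels: decoded label indices, one per token
    """
    log_probs = [np.log(label_probs[t, y]) for t, y in enumerate(predicted_labels)]
    return float(np.mean(log_probs))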
$c_{\theta }(\mathbf {s}, v, \hat{\mathbf {y}})$ is the confidence score calculated by average log probability of the label sequence. The binary classification loss distinguishes positive extractions from negative ones generated across different sentences, potentially leading to a more reliable confidence measure and better ranking performance. Iterative Learning Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision. Ideally, the resulting model after one-round of training becomes better not only at confidence modeling, but also at assertion generation, suggesting that extractions of higher quality can be added as training samples to continue this training process iteratively. The resulting iterative learning procedure (alg:iter) incrementally includes extractions generated by the current model as training samples to optimize the binary classification loss to obtain a better model, and this procedure is continued until convergence. [t] training data $\mathcal {D}$ , initial model $\theta ^{(0)}$ model after convergence $\theta $ $t \leftarrow 0$ # iteration $\mathcal {E} \leftarrow \emptyset $ # generated extractions not converge $\mathcal {E} \leftarrow \mathcal {E} \cup \lbrace (\mathbf {s}, v, \hat{\mathbf {y}})|v,\hat{\mathbf {y}} \in g_{\theta ^{(t)}}(\mathbf {s}), \forall \mathbf {s} \in \mathcal {D}\rbrace $ $\theta ^{(t+1)} \leftarrow \underset{\theta }{\operatornamewithlimits{arg\,min}}\hspace{-8.53581pt}\underset{(\mathbf {s}, v, \hat{\mathbf {y}})\in \mathcal {E}}{\operatorname{\mathbb {E}}}\hspace{-8.53581pt}\max {(0,1-t \cdot c_{\theta }(\mathbf {s}, v, \hat{\mathbf {y}}))}$ $t \leftarrow t+1$ Iterative learning. Experimental Settings We use the OIE2016 dataset BIBREF8 to evaluate our method, which only contains verbal predicates. OIE2016 is automatically generated from the QA-SRL dataset BIBREF13 , and to remove noise, we remove extractions without predicates, with less than two arguments, and with multiple instances of an argument. The statistics of the resulting dataset are summarized in tab:data. We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score. An extraction is judged as correct if the predicate and arguments include the syntactic head of the gold standard counterparts. We compare our method with both competitive neural and non-neural models, including RnnOIE BIBREF3 , OpenIE4, ClausIE BIBREF2 , and PropS BIBREF14 . Our implementation is based on AllenNLP BIBREF15 by adding binary classification loss function on the implementation of RnnOIE. The network consists of 4 BiLSTM layers (2 forward and 2 backward) with 64-dimensional hidden units. ELMo BIBREF16 is used to map words into contextualized embeddings, which are concatenated with a 100-dimensional predicate indicator embedding. The recurrent dropout probability is set to 0.1. Adadelta BIBREF17 with $\epsilon =10^{-6}$ and $\rho =0.95$ and mini-batches of size 80 are used to optimize the parameters. Beam search size is 5. Evaluation Results tab:expmain lists the evaluation results. Our base model (RnnOIE, sec:oie) performs better than non-neural systems, confirming the advantage of supervised training under the sequence labeling setting. To test if the binary classification loss (E.q. 
9 , sec:ours) could yield better-calibrated confidence, we perform one round of fine-tuning of the base model with the hinge loss ( $+$ Binary loss in tab:expmain). We show both the results of using the confidence (E.q. 7 ) of the fine-tuned model to rerank the extractions of the base model (Rerank Only), and the end-to-end performance of the fine-tuned model in assertion generation (Generate). We found both settings lead to improved performance compared to the base model, which demonstrates that calibrating confidence using binary classification loss can improve the performance of both reranking and assertion generation. Finally, our proposed iterative learning approach (alg:iter, sec:ours) significantly outperforms non-iterative settings. We also investigate the performance of our iterative learning algorithm with respect to the number of iterations in fig:iter. The model obtained at each iteration is used to both rerank the extractions generated by the previous model and generate new extractions. We also report results of using only positive samples for optimization. We observe the AUC and F1 of both reranking and generation increases simultaneously for the first 6 iterations and converges after that, which demonstrates the effectiveness of iterative training. The best performing iteration achieves AUC of 0.125 and F1 of 0.315, outperforming all the baselines by a large margin. Meanwhile, using both positive and negative samples consistently outperforms only using positive samples, which indicates the necessity of exposure to the errors made by the system. tab:casererank compares extractions from RnnOIE before and after reranking. We can see the order is consistent with the annotation after reranking, showing the additional loss function's efficacy in calibrating the confidences; this is particularly common in extractions with long arguments. tab:casegen shows a positive extraction discovered after iterative training (first example), and a wrong extraction that disappears (second example), which shows that the model also becomes better at assertion generation. Why is the performance still relatively low? We randomly sample 50 extractions generated at the best performing iteration and conduct an error analysis to answer this question. To count as a correct extraction, the number and order of the arguments should be exactly the same as the ground truth and syntactic heads must be included, which is challenging considering that the OIE2016 dataset has complex syntactic structures and multiple arguments per predicate. We classify the errors into three categories and summarize their proportions in tab:err. “Overgenerated predicate” is where predicates not included in ground truth are overgenerated, because all the verbs are used as candidate predicates. An effective mechanism should be designed to reject useless candidates. “Wrong argument” is where extracted arguments do not coincide with ground truth, which is mainly caused by merging multiple arguments in ground truth into one. “Missing argument” is where the model fails to recognize arguments. These two errors usually happen when the structure of the sentence is complicated and coreference is involved. More linguistic information should be introduced to solve these problems. Conclusion We propose a binary classification loss function to calibrate confidences in open IE. Iteratively optimizing the loss function enables the model to incrementally learn from trial and error, yielding substantial improvement. 
An error analysis is performed to shed light on possible future directions. Acknowledgements This work was supported in part by gifts from Bosch Research, and the Carnegie Bosch Institute.
No reliability diagrams are provided and no explicit comparison is made between confidence scores or methods.
d77c9ede2727c28e0b5a240b2521fd49a19442e0
d77c9ede2727c28e0b5a240b2521fd49a19442e0_0
Q: What's the input representation of OpenIE tuples into the model? Text: Introduction Open information extraction (IE, sekine2006demand, Banko:2007:OIE) aims to extract open-domain assertions represented in the form of $n$ -tuples (e.g., was born in; Barack Obama; Hawaii) from natural language sentences (e.g., Barack Obama was born in Hawaii). Open IE started from rule-based BIBREF0 and syntax-driven systems BIBREF1 , BIBREF2 , and recently has used neural networks for supervised learning BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . A key step in open IE is confidence modeling, which ranks a list of candidate extractions based on their estimated quality. This is important for downstream tasks, which rely on trade-offs between the precision and recall of extracted assertions. For instance, an open IE-powered medical question answering (QA) system may require its assertions in higher precision (and consequently lower recall) than QA systems for other domains. For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 . However, we observe that this often yields sub-optimal ranking results, with incorrect extractions of one sentence having higher likelihood than correct extractions of another sentence. We hypothesize this is due to the issue of a disconnect between training and test-time objectives. Specifically, the system is trained solely to raise likelihood of gold-standard extractions, and during training the model is not aware of its test-time behavior of ranking a set of system-generated assertions across sentences that potentially include incorrect extractions. To calibrate open IE confidences and make them more globally comparable across different sentences, we propose an iterative rank-aware learning approach, as outlined in fig:arch. Given extractions generated by the model as training samples, we use a binary classification loss to explicitly increase the confidences of correct extractions and decrease those of incorrect ones. Without adding additional model components, this training paradigm naturally leads to a better open IE model, whose extractions can be further included as training samples. We further propose an iterative learning procedure that gradually improves the model by incrementally adding extractions to the training data. Experiments on the OIE2016 dataset BIBREF8 indicate that our method significantly outperforms both neural and non-neural models. Neural Models for Open IE We briefly revisit the formulation of open IE and the neural network model used in our paper. Problem Formulation Given sentence $\mathbf {s}=(w_1, w_2, ..., w_n)$ , the goal of open IE is to extract assertions in the form of tuples $\mathbf {r}=(\mathbf {p}, \mathbf {a}_1, \mathbf {a}_2, ..., \mathbf {a}_m)$ , composed of a single predicate and $m$ arguments. Generally, these components in $\mathbf {r}$ need not to be contiguous, but to simplify the problem we assume they are contiguous spans of words from $\mathbf {s}$ and there is no overlap between them. Methods to solve this problem have recently been formulated as sequence-to-sequence generation BIBREF4 , BIBREF5 , BIBREF6 or sequence labeling BIBREF3 , BIBREF7 . We adopt the second formulation because it is simple and can take advantage of the fact that assertions only consist of words from the sentence. 
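Jumping ahead to the model input described further below: each token is represented as the concatenation of its word embedding with a learned predicate-indicator embedding. The following minimal sketch uses illustrative dimensions and class names (the paper's experiments use ELMo representations concatenated with a 100-dimensional indicator embedding).

import torch
import torch.nn as nn

class OpenIEInput(nn.Module):
    """x_t = [W_emb(w_t), W_mask(w_t == v)]"""
    def __init__(self, vocab_size, emb_dim=300, mask_dim=100):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.mask_emb = nn.Embedding(2, mask_dim)  # 0: not the predicate, 1: the predicate

    def forward(self, token_ids, predicate_mask):
        # token_ids, predicate_mask: (batch, seq_len); mask is 1 where w_t == v
        return torch.cat([self.word_emb(token_ids),
                          self.mask_emb(predicate_mask)], dim=-1)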
Within this framework, an assertion $\mathbf {r}$ can be mapped to a unique BIO BIBREF3 label sequence $\mathbf {y}$ by assigning $O$ to the words not contained in $\mathbf {r}$ , $B_{p}$ / $I_{p}$ to the words in $\mathbf {p}$ , and $B_{a_i}$ / $I_{a_i}$ to the words in $\mathbf {a}_i$ respectively, depending on whether the word is at the beginning or inside of the span. The label prediction $\hat{\mathbf {y}}$ is made by the model given a sentence associated with a predicate of interest $(\mathbf {s}, v)$ . At test time, we first identify verbs in the sentence as candidate predicates. Each sentence/predicate pair is fed to the model and extractions are generated from the label sequence. Model Architecture and Decoding Our training method in sec:ours could potentially be used with any probabilistic open IE model, since we make no assumptions about the model and only the likelihood of the extraction is required for iterative rank-aware learning. As a concrete instantiation in our experiments, we use RnnOIE BIBREF3 , BIBREF9 , a stacked BiLSTM with highway connections BIBREF10 , BIBREF11 and recurrent dropout BIBREF12 . Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $ \mathbf {x}_t = [\mathbf {W}_{\text{emb}}(w_t), \mathbf {W}_{\text{mask}}(w_t = v)]. $ The probability of the label at each position is calculated independently using a softmax function: $ P(y_t|\mathbf {s}, v) \propto \text{exp}(\mathbf {W}_{\text{label}}\mathbf {h}_t + \mathbf {b}_{\text{label}}), $ where $\mathbf {h}_t$ is the hidden state of the last layer. At decoding time, we use the Viterbi algorithm to reject invalid label transitions BIBREF9 , such as $B_{a_2}$ followed by $I_{a_1}$ . We use average log probability of the label sequence BIBREF5 as its confidence: $$c(\mathbf {s}, v, \hat{\mathbf {y}}) = \frac{\sum _{t=1}^{|\mathbf {s}|}{\log {P(\hat{y_t}|\mathbf {s}, v)}}}{|\mathbf {s}|}.$$ (Eq. 7) The probability is trained with maximum likelihood estimation (MLE) of the gold extractions. This formulation lacks an explicit concept of cross-sentence comparison, and thus incorrect extractions of one sentence could have higher confidence than correct extractions of another sentence. Iterative Rank-Aware Learning In this section, we describe our proposed binary classification loss and iterative learning procedure. Binary Classification Loss To alleviate the problem of incomparable confidences across sentences, we propose a simple binary classification loss to calibrate confidences to be globally comparable. Given a model $\theta ^\prime $ trained with MLE, beam search is performed to generate assertions with the highest probabilities for each predicate. Assertions are annotated as either positive or negative with respect to the gold standard, and are used as training samples to minimize the hinge loss: $$\hspace{-2.84526pt}\hat{\theta } = \underset{\theta }{\operatornamewithlimits{arg\,min}}\hspace{-8.53581pt}\underset{\begin{array}{c}\mathbf {s} \in \mathcal {D}\\ v, \hat{\mathbf {y}} \in g_{\theta ^\prime }(\mathbf {s})\end{array}}{\operatorname{\mathbb {E}}}\hspace{-11.38109pt}\max {(0,1-t \cdot c_{\theta }(\mathbf {s}, v, \hat{\mathbf {y}}))},$$ (Eq. 9) where $\mathcal {D}$ is the training sentence collection, $g_{\theta ^\prime }$ represents the candidate generation process, and $t \in \lbrace 1,-1\rbrace $ is the binary annotation. 
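A minimal sketch of this calibration objective, together with the iterative loop described in the next paragraph. The confidence values are assumed to be the average log-probabilities of Eq. 7, and the callables passed in (beam-search generation, gold-standard annotation, optimization) are placeholders for components whose code is not shown here.

import torch

def calibration_hinge_loss(confidences, annotations):
    """Eq. 9: mean over generated extractions of max(0, 1 - t * c_theta(s, v, y_hat)).

    confidences: tensor of confidence scores, shape (num_extractions,)
    annotations: tensor of +1 (correct) / -1 (incorrect) labels
    """
    return torch.clamp(1.0 - annotations * confidences, min=0.0).mean()

def iterative_rank_aware_learning(model, sentences, gold,
                                  generate, annotate, fit, max_iters=10):
    """Sketch of the iterative procedure: keep adding the current model's
    extractions to the training pool and refit on the hinge loss."""
    pool = []
    for _ in range(max_iters):
        candidates = generate(model, sentences)   # beam-searched (s, v, y_hat) triples
        pool.extend(annotate(candidates, gold))   # attach t in {+1, -1}
        model = fit(model, pool, calibration_hinge_loss)
    return model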
$c_{\theta }(\mathbf {s}, v, \hat{\mathbf {y}})$ is the confidence score calculated by average log probability of the label sequence. The binary classification loss distinguishes positive extractions from negative ones generated across different sentences, potentially leading to a more reliable confidence measure and better ranking performance. Iterative Learning Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision. Ideally, the resulting model after one-round of training becomes better not only at confidence modeling, but also at assertion generation, suggesting that extractions of higher quality can be added as training samples to continue this training process iteratively. The resulting iterative learning procedure (alg:iter) incrementally includes extractions generated by the current model as training samples to optimize the binary classification loss to obtain a better model, and this procedure is continued until convergence. [t] training data $\mathcal {D}$ , initial model $\theta ^{(0)}$ model after convergence $\theta $ $t \leftarrow 0$ # iteration $\mathcal {E} \leftarrow \emptyset $ # generated extractions not converge $\mathcal {E} \leftarrow \mathcal {E} \cup \lbrace (\mathbf {s}, v, \hat{\mathbf {y}})|v,\hat{\mathbf {y}} \in g_{\theta ^{(t)}}(\mathbf {s}), \forall \mathbf {s} \in \mathcal {D}\rbrace $ $\theta ^{(t+1)} \leftarrow \underset{\theta }{\operatornamewithlimits{arg\,min}}\hspace{-8.53581pt}\underset{(\mathbf {s}, v, \hat{\mathbf {y}})\in \mathcal {E}}{\operatorname{\mathbb {E}}}\hspace{-8.53581pt}\max {(0,1-t \cdot c_{\theta }(\mathbf {s}, v, \hat{\mathbf {y}}))}$ $t \leftarrow t+1$ Iterative learning. Experimental Settings We use the OIE2016 dataset BIBREF8 to evaluate our method, which only contains verbal predicates. OIE2016 is automatically generated from the QA-SRL dataset BIBREF13 , and to remove noise, we remove extractions without predicates, with less than two arguments, and with multiple instances of an argument. The statistics of the resulting dataset are summarized in tab:data. We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score. An extraction is judged as correct if the predicate and arguments include the syntactic head of the gold standard counterparts. We compare our method with both competitive neural and non-neural models, including RnnOIE BIBREF3 , OpenIE4, ClausIE BIBREF2 , and PropS BIBREF14 . Our implementation is based on AllenNLP BIBREF15 by adding binary classification loss function on the implementation of RnnOIE. The network consists of 4 BiLSTM layers (2 forward and 2 backward) with 64-dimensional hidden units. ELMo BIBREF16 is used to map words into contextualized embeddings, which are concatenated with a 100-dimensional predicate indicator embedding. The recurrent dropout probability is set to 0.1. Adadelta BIBREF17 with $\epsilon =10^{-6}$ and $\rho =0.95$ and mini-batches of size 80 are used to optimize the parameters. Beam search size is 5. Evaluation Results tab:expmain lists the evaluation results. Our base model (RnnOIE, sec:oie) performs better than non-neural systems, confirming the advantage of supervised training under the sequence labeling setting. To test if the binary classification loss (E.q. 
9 , sec:ours) could yield better-calibrated confidence, we perform one round of fine-tuning of the base model with the hinge loss ( $+$ Binary loss in tab:expmain). We show both the results of using the confidence (E.q. 7 ) of the fine-tuned model to rerank the extractions of the base model (Rerank Only), and the end-to-end performance of the fine-tuned model in assertion generation (Generate). We found both settings lead to improved performance compared to the base model, which demonstrates that calibrating confidence using binary classification loss can improve the performance of both reranking and assertion generation. Finally, our proposed iterative learning approach (alg:iter, sec:ours) significantly outperforms non-iterative settings. We also investigate the performance of our iterative learning algorithm with respect to the number of iterations in fig:iter. The model obtained at each iteration is used to both rerank the extractions generated by the previous model and generate new extractions. We also report results of using only positive samples for optimization. We observe the AUC and F1 of both reranking and generation increases simultaneously for the first 6 iterations and converges after that, which demonstrates the effectiveness of iterative training. The best performing iteration achieves AUC of 0.125 and F1 of 0.315, outperforming all the baselines by a large margin. Meanwhile, using both positive and negative samples consistently outperforms only using positive samples, which indicates the necessity of exposure to the errors made by the system. tab:casererank compares extractions from RnnOIE before and after reranking. We can see the order is consistent with the annotation after reranking, showing the additional loss function's efficacy in calibrating the confidences; this is particularly common in extractions with long arguments. tab:casegen shows a positive extraction discovered after iterative training (first example), and a wrong extraction that disappears (second example), which shows that the model also becomes better at assertion generation. Why is the performance still relatively low? We randomly sample 50 extractions generated at the best performing iteration and conduct an error analysis to answer this question. To count as a correct extraction, the number and order of the arguments should be exactly the same as the ground truth and syntactic heads must be included, which is challenging considering that the OIE2016 dataset has complex syntactic structures and multiple arguments per predicate. We classify the errors into three categories and summarize their proportions in tab:err. “Overgenerated predicate” is where predicates not included in ground truth are overgenerated, because all the verbs are used as candidate predicates. An effective mechanism should be designed to reject useless candidates. “Wrong argument” is where extracted arguments do not coincide with ground truth, which is mainly caused by merging multiple arguments in ground truth into one. “Missing argument” is where the model fails to recognize arguments. These two errors usually happen when the structure of the sentence is complicated and coreference is involved. More linguistic information should be introduced to solve these problems. Conclusion We propose a binary classification loss function to calibrate confidences in open IE. Iteratively optimizing the loss function enables the model to incrementally learn from trial and error, yielding substantial improvement. 
An error analysis is performed to shed light on possible future directions. Acknowledgements This work was supported in part by gifts from Bosch Research, and the Carnegie Bosch Institute.
word embeddings concatenated with a predicate indicator embedding
a9610cbcca813f4376fbfbf21cc14689c7fbd677
a9610cbcca813f4376fbfbf21cc14689c7fbd677_0
Q: What statistics on the VIST dataset are reported? Text: Introduction Visual storytelling and album summarization tasks have recently been of focus in the domain of computer vision and natural language processing. With the advent of new architectures, solutions for problems like image captioning and language modeling are getting better. Therefore it is only natural to work towards storytelling; deeper visual context yielding a more expressive style language, as it could potentially improve various applications involving tasks using visual descriptions and visual question answering. BIBREF0. Since the release of the VIST visual storytelling dataset BIBREF1, there have been numerous approaches modeling the behavior of stories, leveraging and extending successful sequence-to-sequence based image captioning architectures. Some of them primarily addressed means of incorporating image-sequence feature information into a narrative generating network BIBREF2, BIBREF3, while others focused on model learning patterns and behavioral orientations with changes in back-propagation methods BIBREF4, BIBREF5. Motivated by these works we now want to understand the importance of characters and their relationships in visual storytelling. Specifically, we extract characters from the VIST dataset, analyze their influence across the dataset and exploit them for paying attention to relevant visual segments during story-generation. We report our findings, discuss the directions of our ongoing work and suggest recommendations for using characters as semantics in visual storytelling. Related work BIBREF1 published the VIST dataset along with a baseline sequence-to-sequence learning model that generates stories for image sequences in the dataset. Gradually, as a result of the 2018 storytelling challenge, there have been other works on VIST. Most of them extended the encoder-decoder architecture introduced in the baseline publication by adding attention mechanisms BIBREF3, learning positionally dependent parameters BIBREF2 and using reinforcement learning based methods BIBREF4, BIBREF5. To our best knowledge, there are no prior works making use of characters for visual storytelling. The only work that uses any additional semantics for story generation is BIBREF5. They propose a hierarchical model structure which first generates a “semantic topic" for each image in the sequence and then uses that information during the generation phase. The core module of their hierarchical model is a Semantic Compositional Network (SCN) BIBREF6, a recurrent neural network variant generating text conditioned on the provided semantic concepts. Unlike traditional attention mechanisms, the SCN assembles the information on semantics directly into the neural network cell. It achieves this by extending the gate and state weight matrices to adhere to additional semantic information provided for the language generation phase. Inspired by the results SCN achieved for image and video captioning, we use it for storytelling. The semantic concepts we use are based on character frequencies and their co-occurrence information extracted from the stories of the VIST dataset. Our expectation is that the parameters of the language decoder network generating the story are dependent on the character semantics and would learn to capture linguistic patterns while simultaneously learning mappings to respective visual features of the image sequence. 
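As a rough illustration of how the SCN ties generation to the provided semantics, the sketch below forms a layer's weight matrix as a probability-weighted mixture of per-concept matrices, here weighted by character probabilities. This is a deliberately naive variant for clarity; the actual SCN uses a factorized form to keep the parameter count manageable, and the class name here is ours, not from the paper.

import torch
import torch.nn as nn

class SemanticallyConditionedLinear(nn.Module):
    """W(p) = sum_k p_k * W_k: one weight matrix per semantic concept,
    mixed by the concept/character probability vector p."""
    def __init__(self, num_concepts, in_dim, out_dim):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(num_concepts, out_dim, in_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x, concept_probs):
        # x: (batch, in_dim); concept_probs: (batch, num_concepts)
        w = torch.einsum('bk,koi->boi', concept_probs, self.weights)
        return torch.einsum('boi,bi->bo', w, x) + self.bias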
Data We used the Visual storytelling (VIST) dataset comprising of image sequences obtained from Flickr albums and respective annotated descriptions collected through Amazon Mechanical Turk BIBREF1. Each sequence has 5 images with corresponding descriptions that together make up for a story. Furthermore, for each Flickr album there are 5 permutations of a selected set of its images. In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories. Data ::: Character extraction We extracted characters out of the VIST dataset. To this end, we considered that a character is either “a person" or “an animal". We decided that the best way to do this would be by making use of the human-annotated text instead of images for the sake of being diverse (e.g.: detection on images would yield “person", as opposed to father). The extraction takes place as a two-step process: Identification of nouns: We first used a pretrained part-of-speech tagger BIBREF7 to identify all kinds of nouns in the annotations. Specifically, these noun categories are NN – common, singular or mass, NNS – noun, common, plural, NNP – noun, proper, singular, and NNPS – noun, proper, plural. Filtering for hypernyms: WordNet BIBREF8 is a lexical database over the English language containing various semantic relations and synonym sets. Hypernym is one such semantic relation constituting a category into which words with more specific meanings fall. From among the extracted nouns, we thereby filtered those words that have their lowest common hypernym as either “person" or “animal". Data ::: Character analysis We analyzed the VIST dataset from the perspective of the extracted characters and observed that 20,405 training, 2,349 validation and 2,768 testing data samples have at least one character present among their stories. This is approximately 50% of the data samples in the entire dataset. To pursue the prominence of relationships between these characters, we analyzed these extractions for both individual and co-occurrence frequencies. We found a total of 1,470 distinct characters with 1,333 in training, 387 in validation and 466 in the testing splits. This can be considered as an indication to the limited size of the dataset because the number of distinct characters within each split is strongly dependent on the respective size of that split. Figure FIGREF3 plots the top 30 most frequent characters in the training split of the dataset. Apart from the character “friends" there is a gradual decrease in the occurrence frequencies of the other characters from “mom" to “grandmother". Similarly, in Figure FIGREF4, which plots the top 30 most co-occurring character pairs, (“dad", “mom"), (“friend", “friends") pairs occur drastically more number of times than other pairs in the stories. This can lead to an inclination bias of the story generator towards these characters owing to the data size limitations we discussed. In the process of detecting characters, we observed also that $\sim $5000 distinct words failed on WordNet due to their misspellings (“webxites"), for being proper nouns (“cathrine"), for being an abbreviation (“geez"), and simply because they were compound words (“sing-a-long"). Though most of the models ignore these words based on a vocabulary threshold value (typically 3), we would like to comment that language model creation without accounting for these words could adversely affect the behavior of narrative generation. 
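A minimal sketch of the two-step character extraction described above, using NLTK's part-of-speech tagger and WordNet. The hypernym test here checks whether "person" or "animal" appears in a noun sense's hypernym closure, which approximates the lowest-common-hypernym criterion; the tokenizer and tagger choices are assumptions, since the paper only mentions a pretrained POS tagger and WordNet.

import nltk
from nltk.corpus import wordnet as wn
# requires the 'punkt', 'averaged_perceptron_tagger' and 'wordnet' NLTK data packages

NOUN_TAGS = {"NN", "NNS", "NNP", "NNPS"}

def is_character(word):
    """True if some noun sense of the word falls under 'person' or 'animal'."""
    targets = set(wn.synsets("person", pos=wn.NOUN) + wn.synsets("animal", pos=wn.NOUN))
    for synset in wn.synsets(word, pos=wn.NOUN):
        closure = set(synset.closure(lambda s: s.hypernyms())) | {synset}
        if targets & closure:
            return True
    return False

def extract_characters(story_text):
    """Step 1: keep nouns; step 2: keep those whose hypernyms include person/animal."""
    tagged = nltk.pos_tag(nltk.word_tokenize(story_text))
    return {w.lower() for w, tag in tagged if tag in NOUN_TAGS and is_character(w.lower())}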
Model Our model in Figure FIGREF6 follows the encoder-decoder structure. The encoder module incorporates the image sequence features, obtained using a pretrained convolutional network, into a subject vector. The decoder module, a semantically compositional recurrent network (SCN) BIBREF6, uses the subject vector along with character probabilities and generates a relevant story. Model ::: Character semantics The relevant characters with respect to each data-sample are obtained as a preprocessing step. We denote characters extracted from the human-annotated stories of respective image-sequences as active characters. We then use these active characters to obtain other characters which could potentially influence the narrative to be generated. We denote these as passive characters and they can be obtained using various methods. We describe some methods we tried in Section SECREF5. The individual frequencies of these relevant characters, active and passive are then normalized by the vocabulary size and constitute the character probabilities. Model ::: Encoder Images of a sequence are initially passed through a pretrained ResNet network BIBREF9, for obtaining their features. The features extracted are then provided to the encoder module, which is a simple recurrent neural network employed to learn parameters for incorporating the subjects in the individual feature sets into a subject vector. Model ::: Decoder We use the SCN-LSTM variant of the recurrent neural network for the decoder module as shown in Figure FIGREF10. The network extends each weight matrix of the conventional LSTM to be an ensemble of a set of tag-dependent weight matrices, subjective to the character probabilities. Subject vector from the encoder is fed into the LSTM to initialize the first step. The LSTM parameters utilized when decoding are weighted by the character probabilities, for generating a respective story. Gradients $\nabla $, propagated back to the network, nudge the parameters $W$ to learn while adhering to respective character probabilities $\vec{cp}$: Consequently, the encoder parameters move towards incorporating the image-sequence features better. Experiments We report the current status of our work and the intended directions of progress we wish to make using the designed model. All experiments were performed on the VIST dataset. As mentioned in Section SECREF5, passive characters can be selected by conditioning their relationships on several factors. We explain two such methods: Experiments ::: Method 1 In the first method we naïvely select all the characters co-occurring with respective active characters. Subsequently, probabilities for these passive characters are co-occurrence counts normalized by the corpus vocabulary size. This method enables the model to learn parameters on the distribution of character relationships. Experiments ::: Method 2 In the second approach, we conditionally select a limited number of characters that collectively co-occur most with the respective active characters. This is visualized in Figure FIGREF13. The selected passive characters “girlfriend", “father" and “son" collectively co-occur in the most co-occurring characters of the active characters. $K$ in this case is a tunable hyperparameter. Discussion Both methods we are experimenting with exhibit different initial traits. We are currently working towards analyzing the character relationships learned by the models and understanding the abstract concepts that get generated as a result of such learning. 
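To pin down the two passive-character selection strategies just described, here is a minimal sketch. The symmetric co-occurrence counts are assumed to be precomputed from the training stories, and Method 2's notion of characters that "collectively co-occur most" is approximated here by the top-K summed co-occurrence scores.

from collections import Counter

def passive_characters_method1(active_chars, cooccurrence, vocab_size):
    """Method 1: every character co-occurring with an active character,
    scored by co-occurrence count normalized by the vocabulary size.

    cooccurrence: symmetric dict mapping (char_a, char_b) -> count.
    """
    scores = Counter()
    for (a, b), count in cooccurrence.items():
        if a in active_chars and b not in active_chars:
            scores[b] += count
        elif b in active_chars and a not in active_chars:
            scores[a] += count
    return {ch: c / vocab_size for ch, c in scores.items()}

def passive_characters_method2(active_chars, cooccurrence, vocab_size, k=3):
    """Method 2: keep only the K passive characters with the highest collective scores."""
    scores = passive_characters_method1(active_chars, cooccurrence, vocab_size)
    return {ch: scores[ch] for ch in sorted(scores, key=scores.get, reverse=True)[:k]}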
We do not report any generated stories and evaluations yet as we consider that to be premature without proper examination. However, we feel the training process metrics are encouraging and provide us with enough intuition for pursuing the proposed approach to its fullest scope. Conclusion We have extracted, analyzed and exploited characters in the realm of storytelling using the VIST dataset. We have provided a model that can make use of the extracted characters to learn their relationships and thereby generate grounded and subjective narratives for respective image sequences. For future work we would like to make the encoder semantically compositional by extracting visual tags and also explore ways to improve learning of character relationships while avoiding overfitting.
In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories.
64ab2b92e986e0b5058bf4f1758e849f6a41168b
64ab2b92e986e0b5058bf4f1758e849f6a41168b_0
Q: What is the performance difference in performance in unsupervised feature learning between adverserial training and FHVAE-based disentangled speech represenation learning? Text: Introduction Nowadays speech processing is dominated by deep learning techniques. Deep neural network (DNN) acoustic models (AMs) for the tasks of automatic speech recognition (ASR) and speech synthesis have shown impressive performance for major languages such as English and Mandarin. Typically, training a DNN AM requires large amounts of transcribed data. For a large number of low-resource languages, for which very limited or no transcribed data are available, conventional methods of acoustic modeling are ineffective or even inapplicable. In recent years, there has been an increasing research interest in zero-resource speech processing, i.e., only a limited amount of raw speech data (e.g. hours or tens of hours) are given while no text transcriptions or linguistic knowledge are available. The Zero Resource Speech Challenges (ZeroSpeech) 2015 BIBREF0 , 2017 BIBREF1 and 2019 BIBREF2 precisely focus on this area. One problem tackled by ZeroSpeech 2015 and 2017 is subword modeling, learning frame-level speech representation that is discriminative to subword units and robust to linguistically-irrelevant factors such as speaker change. The latest challenge ZeroSpeech 2019 goes a step further by aiming at building text-to-speech (TTS) systems without any text labels (TTS without T) or linguistic expertise. Specifically, one is required to build an unsupervised subword modeling sub-system to automatically discover phoneme-like units in the concerned language, followed by applying the learned units altogether with speech data from which the units are inferred to train a TTS. Solving this problem may partially assist psycholinguists in understanding young children's language acquisition mechanism BIBREF2 . This study addresses unsupervised subword modeling in ZeroSpeech 2019, which is also referred to as acoustic unit discovery (AUD). It is an essential problem and forms the basis of TTS without T. The exact goal of this problem is to represent untranscribed speech utterances by discrete subword unit sequences, which is slightly different from subword modeling in the contexts of ZeroSpeech 2017 & 2015. In practice, it can be formulated as an extension to the previous two challenges. For instance, after learning the subword discriminative feature representation at frame-level, the discrete unit sequences can be inferred by applying vector quantization methods followed by collapsing consecutive repetitive symbolic patterns. In the previous two challenges, several unsupervised representation learning approaches were proposed for comparison, such as cluster posteriorgrams (PGs) BIBREF3 , BIBREF4 , BIBREF5 , DNN bottleneck features BIBREF6 , BIBREF7 , autoencoders (AEs) BIBREF8 , BIBREF9 , variational AEs (VAEs) BIBREF10 , BIBREF11 and siamese networks BIBREF12 , BIBREF13 , BIBREF14 . One major difficulty in unsupervised subword modeling is dealing with speaker variation. The huge performance degradation caused by speaker variation reported in ZeroSpeech 2017 BIBREF1 implies that speaker-invariant representation learning is crucial and remains to be solved. In ZeroSpeech 2019, speaker-independent subword unit inventory is highly desirable in building a TTS without T system. In the literature, many works focused on improving the robustness of unsupervised feature learning towards speaker variation. 
One direction is to apply linear transform methods. Heck et al. BIBREF5 estimated fMLLR features in an unsupervised manner. Works in BIBREF6 , BIBREF15 estimated fMLLR using a pre-trained out-of-domain ASR. Chen et al. BIBREF7 applied vocal tract length normalization (VTLN). Another direction is to employ DNNs. Zeghidour et al. BIBREF13 proposed to train subword and speaker same-different tasks within a triamese network and untangle linguistic and speaker information. Chorowski et al. BIBREF11 defined a speaker embedding as a condition of VAE decoder to free the encoder from capturing speaker information. Tsuchiya et al. BIBREF16 applied speaker adversarial training in a task related to the zero-resource scenario but transcription for a target language was used in model training. In this paper, we propose to extend our recent research findings BIBREF10 on applying disentangled speech representation learned from factorized hierarchical VAE (FHVAE) models BIBREF17 to improve speaker-invariant subword modeling. The contributions made in this study are in several aspects. First, the FHVAE based speaker-invariant learning is compared with speaker adversarial training in the strictly unsupervised scenario. Second, the combination of adversarial training and disentangled representation learning is studied. Third, our proposed approaches are evaluated on the latest challenge ZeroSpeech 2019, as well as on ZeroSpeech 2017 for completeness. To our best knowledge, direct comparison of the two approaches and their combination has not been studied before. General framework The general framework of our proposed approaches is illustrated in Figure FIGREF2 . Given untranscribed speech data, the first step is to learn speaker-invariant features to support frame labeling. The FHVAE model BIBREF17 is adopted for this purpose. FHVAEs disentangle linguistic content and speaker information encoded in speech into different latent representations. Compared with raw MFCC features, FHVAE reconstructed features conditioned on latent linguistic representation are expected to keep linguistic content unchanged and are more speaker-invariant. Details of the FHVAE structure and feature reconstruction methods are described in Section SECREF3 . The reconstructed features are fed as inputs to Dirichlet process Gaussian mixture model (DPGMM) BIBREF18 for frame clustering, as was done in BIBREF3 . The frame-level cluster labels are regarded as pseudo phone labels to support supervised DNN training. Motivated by successful applications of adversarial training BIBREF19 in a wide range of domain invariant learning tasks BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , this work proposes to add an auxiliary adversarial speaker classification task to explicitly target speaker-invariant feature learning. After speaker adversarial multi-task learning (AMTL) DNN training, softmax PG representation from pseudo phone classification task is used to infer subword unit sequences. The resultant unit sequences are regarded as pseudo transcriptions for subsequent TTS training. Speaker-invariant feature learning by FHVAEs The FHVAE model formulates the generation process of sequential data by imposing sequence-dependent and sequence-independent priors to different latent variables BIBREF17 . It consists of an inference model INLINEFORM0 and a generation model INLINEFORM1 . Let INLINEFORM2 denote a speech dataset with INLINEFORM3 sequences. 
Each INLINEFORM4 contains INLINEFORM5 speech segments INLINEFORM6 , where INLINEFORM7 is composed of fixed-length consecutive frames. The FHVAE model generates a sequence INLINEFORM8 from a random process as follows: (1) An s-vector INLINEFORM9 is drawn from a prior distribution INLINEFORM10 ; (2) Latent segment variables INLINEFORM11 and latent sequence variables INLINEFORM12 are drawn from INLINEFORM13 and INLINEFORM14 respectively; (3) Speech segment INLINEFORM15 is drawn from INLINEFORM16 . Here INLINEFORM17 denotes standard normal distribution, INLINEFORM18 and INLINEFORM19 are parameterized by DNNs. The joint probability for INLINEFORM20 is formulated as, DISPLAYFORM0 Since the exact posterior inference is intractable, the FHVAE introduces an inference model INLINEFORM0 to approximate the true posterior, DISPLAYFORM0 Here INLINEFORM0 and INLINEFORM1 are all diagonal Gaussian distributions. The mean and variance values of INLINEFORM2 and INLINEFORM3 are parameterized by two DNNs. For INLINEFORM4 , during FHVAE training, a trainable lookup table containing posterior mean of INLINEFORM5 for each sequence is updated. During testing, maximum a posteriori (MAP) estimation is used to infer INLINEFORM6 for unseen test sequences. FHVAEs optimize the discriminative segmental variational lower bound which was defined in BIBREF17 . It contains a discriminative objective to prevent INLINEFORM7 from being the same for all utterances. After FHVAE training, INLINEFORM0 encodes segment-level factors e.g. linguistic information, while INLINEFORM1 encodes sequence-level factors that are relatively consistent within an utterance. By concatenating training utterances of the same speaker into a single sequence for FHVAE training, the learned INLINEFORM2 is expected to be discriminative to speaker identity. This work considers applying s-vector unification BIBREF10 to generate reconstructed feature representation that keeps linguistic content unchanged and is more speaker-invariant than the original representation. Specifically, a representative speaker with his/her s-vector (denoted as INLINEFORM3 ) is chosen from the dataset. Next, for each speech segment INLINEFORM4 of an arbitrary speaker INLINEFORM5 , its corresponding latent sequence variable INLINEFORM6 inferred from INLINEFORM7 is transformed to INLINEFORM8 , where INLINEFORM9 denotes the s-vector of speaker INLINEFORM10 . Finally the FHVAE decoder reconstructs speech segment INLINEFORM11 conditioned on INLINEFORM12 and INLINEFORM13 . The features INLINEFORM14 form our desired speaker-invariant representation. Speaker adversarial multi-task learning Speaker adversarial multi-task learning (AMTL) simultaneously trains a subword classification network ( INLINEFORM0 ), a speaker classification network ( INLINEFORM1 ) and a shared-hidden-layer feature extractor ( INLINEFORM2 ), where INLINEFORM3 and INLINEFORM4 are set on top of INLINEFORM5 , as illustrated in Figure FIGREF2 . In AMTL, the error is reversely propagated from INLINEFORM6 to INLINEFORM7 such that the output layer of INLINEFORM8 is forced to learn speaker-invariant features so as to confuse INLINEFORM9 , while INLINEFORM10 tries to correctly classify outputs of INLINEFORM11 into their corresponding speakers. At the same time, INLINEFORM12 learns to predict the correct DPGMM labels of input features, and back-propagate errors to INLINEFORM13 in a usual way. Let INLINEFORM0 and INLINEFORM1 denote the network parameters of INLINEFORM2 and INLINEFORM3 , respectively. 
With the stochastic gradient descent (SGD) algorithm, these parameters are updated as, p p - Lpp, s s - Lss, h h -[Lph - Lsh], where INLINEFORM0 is the learning rate, INLINEFORM1 is the adversarial weight, INLINEFORM2 and INLINEFORM3 are the loss values of subword and speaker classification tasks respectively, both in terms of cross-entropy. To implement Eqt. ( SECREF6 ), a gradient reversal layer (GRL) BIBREF19 was designed to connect INLINEFORM4 and INLINEFORM5 . The GRL acts as identity transform during forward-propagation and changes the sign of loss during back-propagation. After training, the output of INLINEFORM6 is speaker-invariant and subword discriminative bottleneck feature (BNF) representation of input speech. Besides, the softmax output representation of INLINEFORM7 is believed to carry less speaker information than that without performing speaker adversarial training. Subword unit inference and smoothing Subword unit sequences for the concerned untranscribed speech utterances are inferred from softmax PG representation of INLINEFORM0 in the speaker AMTL DNN. For each input frame to the DNN, the DPGMM label with the highest probability in PG representation is regarded as the subword unit assigned to this frame. These frame-level unit labels are further processed by collapsing consecutive repetitive labels to form pseudo transcriptions. We observed non-smoothness in the inferred unit sequences by using the above methods, i.e., frame-level unit labels that are isolated without temporal repetition. Considering that ground-truth phonemes generally span at least several frames, these non-smooth labels are unwanted. This work proposes an empirical method to filter out part of the non-smooth unit labels, which is summarized in Algorithm SECREF7 . [h] Frame-level unit labels INLINEFORM0 Pseudo transcription INLINEFORM1 INLINEFORM2 }, where INLINEFORM3 , INLINEFORM4 for INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 ; INLINEFORM9 INLINEFORM10 Unit sequence smoothing Dataset and evaluation metric ZeroSpeech 2017 development dataset consists of three languages, i.e. English, French and Mandarin. Speaker information for training sets are given while unknown for test sets. The durations of training sets are INLINEFORM0 and INLINEFORM1 hours respectively. Detailed information of the dataset can be found in BIBREF1 . The evaluation metric is ABX subword discriminability. Basically, it is to decide whether INLINEFORM0 belongs to INLINEFORM1 or INLINEFORM2 if INLINEFORM3 belongs to INLINEFORM4 and INLINEFORM5 belongs to INLINEFORM6 , where INLINEFORM7 and INLINEFORM8 are speech segments, INLINEFORM9 and INLINEFORM10 are two phonemes that differ in the central sound (e.g., “beg”-“bag”). Each pair of INLINEFORM11 and INLINEFORM12 is spoken by the same speaker. Depending on whether INLINEFORM13 and INLINEFORM14 are spoken by the same speaker, ABX error rates for across-/within-speaker are evaluated separately. System setup The FHVAE model is trained with merged training sets of all three target languages. Input features are fixed-length speech segments of 10 frames. Each frame is represented by a 13-dimensional MFCC with cepstral mean normalization (CMN) at speaker level. During training, speech utterances spoken by the same speaker are concatenated to a single training sequence. During the inference of hidden variables INLINEFORM0 and INLINEFORM1 , input segments are shifted by 1 frame. To match the length of latent variables with original features, the first and last frame are padded. 
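Returning to the speaker-adversarial branch described earlier in this section: the gradient reversal layer can be written in a few lines, as sketched below in PyTorch (the authors' system is built with Kaldi nnet3, so this is a generic illustration, not their implementation). With the layer in place, training simply minimizes the sum of the pseudo-phone and speaker losses; the reversal flips the speaker gradient only for the shared layers, scaled by the adversarial weight, which yields the update behaviour stated above.

import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def reverse_gradient(features, lam=0.05):
    """Insert between the shared feature extractor and the speaker classifier."""
    return GradientReversal.apply(features, lam)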
To generate speaker-invariant reconstructed MFCCs using the s-vector unification method, a representative speaker is selected from training sets. In this work the English speaker “s4018” is chosen. The encoder and decoder networks of the FHVAE are both 2-layer LSTM with 256 neurons per layer. Latent variable dimensions for INLINEFORM2 and INLINEFORM3 are 32. FHVAE training is implemented by using an open-source tool BIBREF17 . The FHVAE based speaker-invariant MFCC features with INLINEFORM0 and INLINEFORM1 are fed as inputs to DPGMM clustering. Training data for the three languages are clustered separately. The numbers of clustering iterations for English, French and Mandarin are INLINEFORM2 and 1400. After clustering, the numbers of clusters are INLINEFORM3 and 314. The obtained frame labels support multilingual DNN training. DNN input features are MFCC+CMVN. The layer-wise structure of INLINEFORM4 is INLINEFORM5 . Nonlinear function is sigmoid, except the linear BN layer. INLINEFORM6 contains 3 sub-networks, one for each language. The sub-network contains a GRL, a feed-forward layer (FFL) and a softmax layer. The GRL and FFL are 1024-dimensional. INLINEFORM7 also contains 3 sub-networks, each having a 1024-dimensional FFL and a softmax layer. During AMTL DNN training, the learning rate starts from INLINEFORM8 to INLINEFORM9 with exponential decay. The number of epochs is 5. Speaker adversarial weight INLINEFORM10 ranges from 0 to INLINEFORM11 . After training, BNFs extracted from INLINEFORM12 are evaluated by the ABX task. DNN is implemented using Kaldi BIBREF24 nnet3 recipe. DPGMM is implemented using tools developed by BIBREF18 . DPGMM clustering towards raw MFCC features is also implemented to generate alternative DPGMM labels for comparison. In this case, the numbers of clustering iterations for the three languages are INLINEFORM0 and 3000. The numbers of clusters are INLINEFORM1 and 596. The DNN structure and training procedure are the same as mentioned above. FHVAE model training and speaker-invariant MFCC reconstruction are performed following the configurations in ZeroSpeech 2017. The unit dataset is used for training. During MFCC reconstruction, a male speaker for each of the two languages is randomly selected as the representative speaker for s-vector unification. Our recent research findings BIBREF10 showed that male speakers are more suitable than females in generating speaker-invariant features. The IDs of the selected speakers are “S015” and “S002” in English and Surprise respectively. In DPGMM clustering, the numbers of clustering iterations are both 320. Input features are reconstructed MFCCs+ INLINEFORM0 + INLINEFORM1 . After clustering, the numbers of clusters are 518 and 693. The speaker AMTL DNN structure and training procedure follow configurations in ZeroSpeech 2017. One difference is the placement of adversarial sub-network INLINEFORM2 . Here INLINEFORM3 is put on top of the FFL in INLINEFORM4 instead of on top of INLINEFORM5 . Besides, the DNN is trained in a monolingual manner. After DNN training, PGs for voice and test sets are extracted. BNFs for test set are also extracted. Adversarial weights INLINEFORM6 ranging from 0 to INLINEFORM7 with a step size of INLINEFORM8 are evaluated on English test set. The TTS model is trained with voice dataset and their subword unit sequences inferred from PGs. TTS training is implemented using tools BIBREF27 in the same way as in the baseline. 
The trained TTS synthesizes speech waveforms according to unit sequences inferred from test speech utterances. Algorithm SECREF7 is applied to voice set and optionally applied to test set. Experimental results Average ABX error rates on BNFs over three target languages with different values of INLINEFORM0 are shown in Figure FIGREF11 . In this Figure, INLINEFORM0 denotes that speaker adversarial training is not applied. From the dashed (blue) lines, it can be observed that speaker adversarial training could reduce ABX error rates in both across- and within-speaker conditions, with absolute reductions of INLINEFORM1 and INLINEFORM2 respectively. The amount of improvement is in accordance with the findings reported in BIBREF16 , despite that BIBREF16 exploited English transcriptions during training. The dash-dotted (red) lines show that when DPGMM labels generated by reconstructed MFCCs are employed in DNN training, the positive impact of speaker adversarial training in across-speaker condition is relatively limited. Besides, negative impact is observed in within-speaker condition. From Figure FIGREF11 , it can be concluded that for the purpose of improving the robustness of subword modeling towards speaker variation, frame labeling based on disentangled speech representation learning is more prominent than speaker adversarial training. ABX error rates on subword unit sequences, PGs and BNFs with different values of INLINEFORM0 evaluated on English test set are shown in Figure FIGREF16 . Algorithm SECREF7 is not applied at this stage. It is observed that speaker adversarial training could achieve INLINEFORM0 and INLINEFORM1 absolute error rate reductions on PG and BNF representations. The unit sequence representation does not benefit from adversarial training. Therefore, the optimal INLINEFORM2 for unit sequences is 0. The performance gap between frame-level PGs and unit sequences measures the phoneme discriminability distortion caused by the unit inference procedure in this work. We fix INLINEFORM0 to train the TTS model, and synthesize test speech waveforms using the trained TTS. Experimental results of our submission systems are summarized in Table TABREF17 . In this Table, “+SM” denotes applying sequence smoothing towards test set unit labels. Compared with the official baseline, our proposed approaches could significantly improve unit quality in terms of ABX discriminability. Our system without applying SM achieves INLINEFORM0 and INLINEFORM1 absolute error rate reductions in English and Surprise sets. If SM is applied, while the ABX error rate increases, improvements in all the other evaluation metrics are observed. This implies that for the goal of speech synthesis, there is a trade off between quality and quantity of the learned subword units. Besides, our ABX performance is competitive to, or even better than the supervised topline. Our systems do not outperform baseline in terms of synthesis quality. One possible explanation is that our learned subword units are much more fine-grained than those in the baseline AUD, making the baseline TTS less suitable for our AUD system. In the future, we plan to investigate on alternative TTS models to take full advantage of our learned subword units. Dataset and evaluation metrics ZeroSpeech 2019 BIBREF2 provides untranscribed speech data for two languages. English is used for development while the surprise language (Indonesian) BIBREF25 , BIBREF26 is used for test only. Each language pack consists of training and test sets. 
The training set consists of a unit discovery dataset for building unsupervised subword models, and a voice dataset for training the TTS system. Details of ZeroSpeech 2019 datasets are listed in Table TABREF13 . There are two categories of evaluation metrics in ZeroSpeech 2019. The metrics for text embeddings, e.g. subword unit sequences, BNFs and PGs, are ABX discriminability and bitrate. Bitrate is defined as the amount of information provided in the inferred unit sequences. The metrics for synthesized speech waveforms are character error rate (CER), speaker similarity (SS, 1 to 5, larger is better) and mean opinion score (MOS, 1 to 5, larger is better), all evaluated by native speakers. Conclusions This study tackles robust unsupervised subword modeling in the zero-resource scenario. The robustness towards speaker variation is achieved by combining speaker adversarial training and FHVAE based disentangled speech representation learning. Our proposed approaches are evaluated on ZeroSpeech 2019 and ZeroSpeech 2017. Experimental results on ZeroSpeech 2017 show that both approaches are effective while the latter is more prominent, and that their combination brings further marginal improvement in across-speaker condition. Results on ZeroSpeech 2019 show that our approaches achieve significant ABX error rate reduction to the baseline system. The proposed unit sequence smoothing algorithm improves synthesis quality, at a cost of slight decrease in ABX discriminability. Acknowledgements This research is partially supported by the Major Program of National Social Science Fund of China (Ref:13&ZD189), a GRF project grant (Ref: CUHK 14227216) from Hong Kong Research Grants Council and a direct grant from CUHK Research Committee.
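For reference, the bitrate metric for unit sequences can be approximated from the unigram entropy of the inferred symbols, as in the rough sketch below; reported numbers come from the official ZeroSpeech evaluation scripts, so this is only an illustration of the idea:

```python
import math
from collections import Counter

def approx_bitrate(unit_sequences, total_duration_seconds):
    """Approximate bitrate (bits/s) of discrete unit sequences.

    unit_sequences: list of lists of unit symbols, one list per utterance.
    total_duration_seconds: summed duration of the corresponding audio.
    """
    symbols = [u for seq in unit_sequences for u in seq]
    n = len(symbols)
    counts = Counter(symbols)
    entropy_bits = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * entropy_bits / total_duration_seconds

# Example: two short utterances covering 3.5 seconds of audio in total.
print(approx_bitrate([["u3", "u3", "u17", "u4"], ["u4", "u9"]], 3.5))
```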
Unanswerable
bcd6befa65cab3ffa6334c8ecedd065a4161028b
bcd6befa65cab3ffa6334c8ecedd065a4161028b_0
Q: What are puns? Text: Introduction Humour is one of the most complex and intriguing phenomenon of the human language. It exists in various forms, across space and time, in literature and culture, and is a valued part of human interactions. Puns are one of the simplest and most common forms of humour in the English language. They are also one of the most widespread forms of spontaneous humour BIBREF0 and have found their place in casual conversations, literature, online comments, tweets and advertisements BIBREF1 , BIBREF2 . Puns are a hugely versatile and commonly used literary device and it is essential to include them in any comprehensive approach to computational humour. In this paper, we consider Hindi-English code-mixed puns and aim to automatically recover their targets. The target of a pun is its phonologically similar counterpart, the relationship to which and whose resolution (recovery) in the mind of the listener/hearer induces humour. For example, in the pun “The life of a patient of hypertension is always at steak." the word “steak" is the pun with target “stake". With India being a diverse linguistic region, there is an ever increasing usage of code-mixed Hindi-English language (along with various others) because bilingualism and even multilingualism are quite common. Consequently, we have also seen an increase in the usage of code-mixed language in online forums, advertisements etc. Code-mixed humour, especially puns have become increasingly popular because being able to use the same punning techniques but with two languages in play has opened up numerous avenues for new and interesting wordplays. With the increasing popularity and acceptance for the usage of code-mixed language, it has become important that computers are also able to process it and even decipher complex phenomena like humour. Traditional Word Sense Disambiguation (WSD) based methods cannot be used in target recovery of code-mixed puns, because they are no longer about multiple senses of a single word but about two words from two different languages. Code-switching comes with no markers, and the punning word may not even be a word in either of the languages being used. Sometimes words from the two languages can be combined to form a word which only a bilingual speaker would understand. Hence, this task on such data calls for a different set of strategies altogether. We approach this problem in two parts. First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sequential and intra-word. Second, we develop a four stage pipeline to achieve our goal - Language Identification, Pun Candidate Identification, Context Lookup and Phonetic Distance Minimization. We then test our approach on a small dataset and note that our method is successfully able to recover targets for a majority of the puns. To the best of our knowledge, this is a first attempt at dealing with code-mixed puns. The outline of the paper is as follows: Section 2 gives a brief description of the background and prior work on puns - both in the field of linguistics and in the field of computational humour, along with a brief introduction to the field of code-mixing. Section 3 defines our problem statement, our classification model on code-mixed puns, the dataset we use to test our approach, and our proposed model for the task of automatic target recovery of Hindi-English code-mixed puns. In Section 4, we analyse the performance of our model on a set of puns, and discuss the various error cases. 
Finally, we conclude in Section 5 with a review of our research contributions and an outline of our plans for future work. Puns Puns are a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect BIBREF3 . Puns where the two meanings share the same pronunciation are known as homophonic or perfect puns, while those relying on similar but non-identical sounding words are known as heterophonic BIBREF4 or imperfect puns BIBREF5 . In this paper, we study automatic target recoverability of English-Hindi code mixed puns - which are more commonly imperfect puns, but may also be perfect puns in some cases. Zwicky and Zwicky zwicky1986imperfect, Sobkowiak sobkowiak1991metaphonology extensively studied various phonological variations in imperfect puns such as strong asymmetry in phoneme substitution. They note that puns show more frequent changes in vowels than in consonants because of their smaller role in target recoverability. Puns have received attention in the field of computational humour, both in generation of puns and their understanding. Generation: One of the earliest attempts at generating humour was by Lessard and Levin lessard1992computational, when they built an antonym-based system to generate Tom Swifties. Since then, we have seen various other attempts at the task with different strategies. JAPE was a system which exploited framing and phonetic relationships to automatically generate funny punning riddles, or more specifically phonologically ambiguous riddles, having noun phrase punchlines BIBREF6 . Venour venour1999computational built a system which generated HCPPs (Homonym Common Phrase Pun), simple 2 sentence puns based on associations between words occurring in common phrases. WisCraic was a system built by McKay mckay2002generation, which generated simple one-sentence puns based on semantic associations of words. Valitutti et al. valitutti2008textual attempted to automatically generate advertisements by punning on familiar expressions, with an affective connotation. Identification and understanding: Hempelmann hempelmann2003paronomasic studied target recoverability, arguing that a good model for it provides necessary groundwork for effective automatic pun generation. He worked on a theory which models prominent factors in punning such as phonological similarity and studied how these measures could be used to evaluate possible imperfect puns given an input word and a set of target words. Yokogawa yokogawa2002japanese analyzed ungrammatical Japanese puns and generated target candidates by replacing ungrammatical parts of the sentence by similar expressions. Taylor and Mazlack taylor2004computationally worked on computational recognition of word-play in the restricted domain of Knock-Knock jokes. Jaech et al. jaech2016phonological developed a computational model for target recovery of puns using techniques for automatic speech recognition, and learned phone edit probabilities in puns. Miller and Gurevych Miller2015AutomaticDO, Miller et al.miller2017semeval describe different methods on pun identification and disambiguation. Word Sense Disambiguation (WSD) based techniques are most common among the methods used. To the best of our knowledge no prior work has been attempted on code-mixed puns. Code-mixing Code-mixing is the mixing of two or more languages or language varieties. 
Code-mixing is now recognized as a natural part of bilingual and multilingual language use. Significant linguistic efforts have been made to understand the sociological and conversational necessity behind code-switching BIBREF7 ; for example, to understand whether it is an act of identity in a social group, or a consequence of a lack of competence in either of the languages. These papers distinguish between inter-sentence, intra-sentence and intra-word code mixing. Different types of language mixing phenomena have been discussed and defined by several linguists, with some making clear distinctions between phenomena based on certain criteria, while others use `code-mixing’ or `code-switching’ as umbrella terms to include any type of language mixing — see, e.g., Muysken muysken1995code or Gafaranga and Torras gafaranga2002interactional. In this paper, we use both these terms ‘code-mixing’ and `code-switching' interchangeably. Coming to the work on automatic analysis of code-mixed languages, there have been studies on detecting code mixing in spoken language as well as different types of short texts, such as information retrieval queries BIBREF8 , SMS messages BIBREF9 , BIBREF10 , social media data BIBREF11 and online conversations BIBREF12 . These scholars have carried out experiments for the task of language identification using language models, dictionaries, logistic regression classification, Conditional Random Fields, SVMs, and noted that approaches using contextual knowledge were most robust. King and Abney king2013labeling used weakly semi-supervised methods to perform word-level language identification. We however, use a dictionary based approach for the language identification task. While working with puns, ambiguity in language identification can be an important marker for identifying the pun, so it is more important for us to recognize all possible ambiguities rather than picking just one depending on probabilities. This ability to recognize ambiguities, and the simplicity of a dictionary-based language identification model makes it suited for this task. Methodology We focus on the task of automatically disambiguating or recovering Hindi-English code mixed puns. For this purpose, it is first necessary to understand what these puns are. Classification For the purposes of this research, we only consider puns where the ambiguity or the wordplay lies in the code-switching i.e, the pun word and its target are from different languages. For example the pun "Rivers can't hear because woh behri hoti hai." is a sentence with the pun being behri (meaning deaf) and its target being beh rahi (meaning flowing). Here, while the sentence is code-mixed, the pun word and the target both belong to the same language. We do not consider such puns for the present study. We analyze the structure of code-mixed puns with the pun word and its target belonging to different languages and propose two broad categories to classify them in - puns where the code-mixing is intra-sentential and the other where it is intra-word. Both these categories are explained below, while we evaluate only on the former category. Intra-sentential code-mixing is where code-switching occurs within a sentence. Here, the language varies at the word level. Also, each word of the sentence belongs to one or the other language. Table 1 gives examples of puns belonging to this category. In this category, code mixing is present within a word. 
New words are formed using Portmanteau or Blending where two or more syllables/phonemes from different languages are blended together to form a single word, resulting in a word which is phonetically similar to the target word. Table 2 illustrates examples of intra-word code-mixed puns. Dataset Most puns we hear or use in everyday conversations are rarely recorded. One of the most common resources to find recorded puns are advertisements, for example the highly creative and frequently released Amul advertisements in India BIBREF1 . Most of these are contextually integrated BIBREF0 with an image. While such puns may lose their humour out of context, it is still possible to recover their targets, so using these does not affect our task in any way To create a dataset to test our model on, we collected 518 advertisements released by Amul in the years 2014, 2015, 2017 and 2018, from their official web page. Of these, 333 were puns, including 121 code-mixed puns as defined in Section 3.1. We extracted the text of these 121 code-mixed puns and asked 3 people to disambiguate them, given just the advertisement text. All three annotators were university students in 22-23 years age group, native Hindi speakers with bilingual fluency in English. The annotators were asked to identify the location of the pun in each of the advertisements and write down the target of the pun. Any disagreements between annotators were resolved by mutual discussion. In a few cases where puns were identified to have multiple targets, we kept all such possibilities in our dataset. A few puns were identified to be non-recoverable because of the lack of contextual knowledge, while a few puns had multiple pun locations. We removed both these types from our dataset, which left us with 110 puns. Finally, we divided these 110 annotated puns into the two categories as defined in Section 3.1 thereby getting 51 advertisements categorized as intra-sentential code-mixed puns, and the rest as intra-word code-mixed puns. We use the former as our test data. Model For preprocessing the text we give as input to our system, we first tokenize the advertisement text using NLTK's BIBREF13 tokenizer and remove all punctuations. We then give the resultant tokens as input to our model, which is a 4 step process as described below: At this step, we aim to identify the language of each of the tokens in the input text by classifying them into one of the 5 categories: English, Hindi, Named Entity (NE), Out of Vocabulary (OOV), or Ambiguous (words that could belong to both English and Hindi). We use a dictionary-based lookup method to classify a word in English or Hindi. Since the input is in Roman script, to recognize Hindi words, we use a list of 30k transliterated Hindi words in Roman to their Devanagari counterparts BIBREF14 . For the English language, we collected news data from the archives of a leading Indian Newspaper, The Hindu. Data from 2012-2018 under the tags National, International, Sports, Cinema, Television was collected, amounting to 12,600 articles with 200k sentences and around 38k unique words. We use this data to build an English dictionary. Also, we used NLTK's BIBREF13 Named Entity Recognition module on the same data to get a dictionary of Named Entities. We first try to classify all tokens as English, Hindi and NE using these dictionaries. Then, words which are found in both English and Hindi are marked as Ambiguous. The words which do not fall into any of these are classified as OOV. 
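A minimal sketch of this dictionary-based labelling step is shown below; the function and dictionary names are hypothetical, with the English vocabulary, romanised Hindi word list and named-entity set built from the resources described above:

```python
def label_token(token, english_vocab, hindi_roman_vocab, named_entities):
    """Assign one of: NE, Ambiguous, English, Hindi, OOV to a romanised token."""
    if token in named_entities:
        return "NE"
    in_english = token.lower() in english_vocab
    in_hindi = token.lower() in hindi_roman_vocab  # romanised Hindi word list
    if in_english and in_hindi:
        return "Ambiguous"
    if in_english:
        return "English"
    if in_hindi:
        return "Hindi"
    return "OOV"

# e.g. label_token("behri", english_vocab, hindi_roman_vocab, named_entities) -> "Hindi"
```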
We now identify all possible punning locations in the text. For this, we consider words on the boundaries of language change as candidates for pun locations. Then, all NEs and OOV words are added to the list of pun candidates as well. Third, if any Ambiguous words exist in the text, we consider it once as English and once as Hindi for the next steps. In this step, we contextually lookup all the candidate locations using left context and right context to get a list of all words that may occur at that position. We use bi-gram language models we built using Knesser-Ney smoothing BIBREF15 . We used the data mentioned in the previous step to build the language model for English, and 100k sentences from Hindi monolingual data from BIBREF16 to build the language models for English and Hindi respectively. As it is highly likely that the left and the right context at a pun location belong to different languages, we look at each of those separately instead of taking an intersection of the left and the right context. Lastly, at each pun location, we calculate the similarity of the word at that location with all the words that can occur at that location depending on the context and pick the most similar words as the possible targets. To compare words belonging to two different languages on a phonetic basis, we convert both of them to WX notation BIBREF17 , which denotes a standard way to represent Indian languages in the Roman script. We transliterate our identified Hindi words from Devanagari to WX notation. To convert English words to the same notation, we use the CMU phonetic dictionary , which uses a 39 phoneme set to represent North American pronunciations of English words. We build a mapping between this phoneme set and WX notation. Whenever there was no exact parallel between CMU pronouncing dictionary's notation and WX, we used the word's Indian English pronunciation to find the closest match. Once we converted all to WX notation, we use a modified version of Levenshtein Distance BIBREF18 to find most similar words. In this normalized version of Levenshtein distance, we account for a few features like aspirations (for example, /p/,/ph/) which are non-phonemic in English, vowel elongations, rhyme, same beginning or ending sounds. In case of an OOV word, since it cannot be converted to WX notation due to non-availability of any phonetic transcription, we simply find the words with the least orthographic distance when written in Roman script, using a similar measure as used for phonetic distance with a few more normalizations (for example, considering 'w' and 'v' as similar). Results and discussion We test the model explained in the previous section on our test dataset described in Section 3.2 and note that this method is correctly able to recover targets for 34 out of these 51 puns, or around 67% of the puns, which are very encouraging results for this complex task. Examples where the system performed successfully are given in Table 3 . We do a thorough error analysis below for the cases our method fails for. Conclusion and Future work To conclude, in this paper, we present a first-ever work on target recovery code-mixed puns. We study various puns where the word-play is a result of code-switching, and classify them into 2 categories - puns with intra-sentential code mixing and those with intra-word code mixing. We then propose a methodology to recover the targets for puns belonging to the former category, using only monolingual language data. 
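The similarity computation at the core of this pipeline, a Levenshtein distance with phonetically motivated substitution costs over WX-notation strings, can be sketched as follows; the cost table and symbols are illustrative only and much coarser than the normalisations (aspiration, vowel elongation, rhyme, shared onsets and codas) actually used:

```python
def weighted_levenshtein(a, b, sub_cost=None):
    """Edit distance with a pluggable substitution cost, normalised by length.

    a, b: sequences of phone symbols (e.g. WX-notation strings split into phones).
    sub_cost: optional function giving a reduced cost for near-phones,
              e.g. aspirated/unaspirated pairs or short/long vowels.
    """
    if sub_cost is None:
        sub_cost = lambda x, y: 0.0 if x == y else 1.0
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)
    for j in range(1, n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1.0,            # deletion
                          d[i][j - 1] + 1.0,            # insertion
                          d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]))
    return d[m][n] / max(m, n, 1)

# Illustrative near-phone discount (the real feature set is richer).
near = {("p", "ph"), ("ph", "p"), ("a", "aa"), ("aa", "a")}
cost = lambda x, y: 0.0 if x == y else (0.5 if (x, y) in near else 1.0)
print(weighted_levenshtein(list("behri"), list("behrahi"), cost))
```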
We test our proposed approach on a small manually annotated dataset, and we see that our system was able to successfully recover 67% of the puns from the set. In the future, we want to perform a more comprehensive evaluation of this approach on a larger, more diverse set of puns. We want to improve and extend our approach to be able to recover intra-word code-mixed puns along with the intra-sentential ones that it handles right now. After that, the system should be extended to be able to recover all kinds of puns in code-mixed language, regardless of whether the pun itself is monolingual or code-mixed. Acknowledgements We thank the anonymous reviewers for their comments that helped improve this paper.
a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect
479fc9e6d6d80e69f425d9e82e618e6b7cd12764
479fc9e6d6d80e69f425d9e82e618e6b7cd12764_0
Q: What are the categories of code-mixed puns? Text: Introduction Humour is one of the most complex and intriguing phenomenon of the human language. It exists in various forms, across space and time, in literature and culture, and is a valued part of human interactions. Puns are one of the simplest and most common forms of humour in the English language. They are also one of the most widespread forms of spontaneous humour BIBREF0 and have found their place in casual conversations, literature, online comments, tweets and advertisements BIBREF1 , BIBREF2 . Puns are a hugely versatile and commonly used literary device and it is essential to include them in any comprehensive approach to computational humour. In this paper, we consider Hindi-English code-mixed puns and aim to automatically recover their targets. The target of a pun is its phonologically similar counterpart, the relationship to which and whose resolution (recovery) in the mind of the listener/hearer induces humour. For example, in the pun “The life of a patient of hypertension is always at steak." the word “steak" is the pun with target “stake". With India being a diverse linguistic region, there is an ever increasing usage of code-mixed Hindi-English language (along with various others) because bilingualism and even multilingualism are quite common. Consequently, we have also seen an increase in the usage of code-mixed language in online forums, advertisements etc. Code-mixed humour, especially puns have become increasingly popular because being able to use the same punning techniques but with two languages in play has opened up numerous avenues for new and interesting wordplays. With the increasing popularity and acceptance for the usage of code-mixed language, it has become important that computers are also able to process it and even decipher complex phenomena like humour. Traditional Word Sense Disambiguation (WSD) based methods cannot be used in target recovery of code-mixed puns, because they are no longer about multiple senses of a single word but about two words from two different languages. Code-switching comes with no markers, and the punning word may not even be a word in either of the languages being used. Sometimes words from the two languages can be combined to form a word which only a bilingual speaker would understand. Hence, this task on such data calls for a different set of strategies altogether. We approach this problem in two parts. First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sequential and intra-word. Second, we develop a four stage pipeline to achieve our goal - Language Identification, Pun Candidate Identification, Context Lookup and Phonetic Distance Minimization. We then test our approach on a small dataset and note that our method is successfully able to recover targets for a majority of the puns. To the best of our knowledge, this is a first attempt at dealing with code-mixed puns. The outline of the paper is as follows: Section 2 gives a brief description of the background and prior work on puns - both in the field of linguistics and in the field of computational humour, along with a brief introduction to the field of code-mixing. Section 3 defines our problem statement, our classification model on code-mixed puns, the dataset we use to test our approach, and our proposed model for the task of automatic target recovery of Hindi-English code-mixed puns. 
In Section 4, we analyse the performance of our model on a set of puns, and discuss the various error cases. Finally, we conclude in Section 5 with a review of our research contributions and an outline of our plans for future work. Puns Puns are a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect BIBREF3 . Puns where the two meanings share the same pronunciation are known as homophonic or perfect puns, while those relying on similar but non-identical sounding words are known as heterophonic BIBREF4 or imperfect puns BIBREF5 . In this paper, we study automatic target recoverability of English-Hindi code mixed puns - which are more commonly imperfect puns, but may also be perfect puns in some cases. Zwicky and Zwicky zwicky1986imperfect, Sobkowiak sobkowiak1991metaphonology extensively studied various phonological variations in imperfect puns such as strong asymmetry in phoneme substitution. They note that puns show more frequent changes in vowels than in consonants because of their smaller role in target recoverability. Puns have received attention in the field of computational humour, both in generation of puns and their understanding. Generation: One of the earliest attempts at generating humour was by Lessard and Levin lessard1992computational, when they built an antonym-based system to generate Tom Swifties. Since then, we have seen various other attempts at the task with different strategies. JAPE was a system which exploited framing and phonetic relationships to automatically generate funny punning riddles, or more specifically phonologically ambiguous riddles, having noun phrase punchlines BIBREF6 . Venour venour1999computational built a system which generated HCPPs (Homonym Common Phrase Pun), simple 2 sentence puns based on associations between words occurring in common phrases. WisCraic was a system built by McKay mckay2002generation, which generated simple one-sentence puns based on semantic associations of words. Valitutti et al. valitutti2008textual attempted to automatically generate advertisements by punning on familiar expressions, with an affective connotation. Identification and understanding: Hempelmann hempelmann2003paronomasic studied target recoverability, arguing that a good model for it provides necessary groundwork for effective automatic pun generation. He worked on a theory which models prominent factors in punning such as phonological similarity and studied how these measures could be used to evaluate possible imperfect puns given an input word and a set of target words. Yokogawa yokogawa2002japanese analyzed ungrammatical Japanese puns and generated target candidates by replacing ungrammatical parts of the sentence by similar expressions. Taylor and Mazlack taylor2004computationally worked on computational recognition of word-play in the restricted domain of Knock-Knock jokes. Jaech et al. jaech2016phonological developed a computational model for target recovery of puns using techniques for automatic speech recognition, and learned phone edit probabilities in puns. Miller and Gurevych Miller2015AutomaticDO, Miller et al.miller2017semeval describe different methods on pun identification and disambiguation. Word Sense Disambiguation (WSD) based techniques are most common among the methods used. To the best of our knowledge no prior work has been attempted on code-mixed puns. 
Code-mixing Code-mixing is the mixing of two or more languages or language varieties. Code-mixing is now recognized as a natural part of bilingual and multilingual language use. Significant linguistic efforts have been made to understand the sociological and conversational necessity behind code-switching BIBREF7 ; for example, to understand whether it is an act of identity in a social group, or a consequence of a lack of competence in either of the languages. These papers distinguish between inter-sentence, intra-sentence and intra-word code mixing. Different types of language mixing phenomena have been discussed and defined by several linguists, with some making clear distinctions between phenomena based on certain criteria, while others use `code-mixing’ or `code-switching’ as umbrella terms to include any type of language mixing — see, e.g., Muysken muysken1995code or Gafaranga and Torras gafaranga2002interactional. In this paper, we use both these terms ‘code-mixing’ and `code-switching' interchangeably. Coming to the work on automatic analysis of code-mixed languages, there have been studies on detecting code mixing in spoken language as well as different types of short texts, such as information retrieval queries BIBREF8 , SMS messages BIBREF9 , BIBREF10 , social media data BIBREF11 and online conversations BIBREF12 . These scholars have carried out experiments for the task of language identification using language models, dictionaries, logistic regression classification, Conditional Random Fields, SVMs, and noted that approaches using contextual knowledge were most robust. King and Abney king2013labeling used weakly semi-supervised methods to perform word-level language identification. We however, use a dictionary based approach for the language identification task. While working with puns, ambiguity in language identification can be an important marker for identifying the pun, so it is more important for us to recognize all possible ambiguities rather than picking just one depending on probabilities. This ability to recognize ambiguities, and the simplicity of a dictionary-based language identification model makes it suited for this task. Methodology We focus on the task of automatically disambiguating or recovering Hindi-English code mixed puns. For this purpose, it is first necessary to understand what these puns are. Classification For the purposes of this research, we only consider puns where the ambiguity or the wordplay lies in the code-switching i.e, the pun word and its target are from different languages. For example the pun "Rivers can't hear because woh behri hoti hai." is a sentence with the pun being behri (meaning deaf) and its target being beh rahi (meaning flowing). Here, while the sentence is code-mixed, the pun word and the target both belong to the same language. We do not consider such puns for the present study. We analyze the structure of code-mixed puns with the pun word and its target belonging to different languages and propose two broad categories to classify them in - puns where the code-mixing is intra-sentential and the other where it is intra-word. Both these categories are explained below, while we evaluate only on the former category. Intra-sentential code-mixing is where code-switching occurs within a sentence. Here, the language varies at the word level. Also, each word of the sentence belongs to one or the other language. Table 1 gives examples of puns belonging to this category. In this category, code mixing is present within a word. 
New words are formed using Portmanteau or Blending where two or more syllables/phonemes from different languages are blended together to form a single word, resulting in a word which is phonetically similar to the target word. Table 2 illustrates examples of intra-word code-mixed puns. Dataset Most puns we hear or use in everyday conversations are rarely recorded. One of the most common resources to find recorded puns are advertisements, for example the highly creative and frequently released Amul advertisements in India BIBREF1 . Most of these are contextually integrated BIBREF0 with an image. While such puns may lose their humour out of context, it is still possible to recover their targets, so using these does not affect our task in any way To create a dataset to test our model on, we collected 518 advertisements released by Amul in the years 2014, 2015, 2017 and 2018, from their official web page. Of these, 333 were puns, including 121 code-mixed puns as defined in Section 3.1. We extracted the text of these 121 code-mixed puns and asked 3 people to disambiguate them, given just the advertisement text. All three annotators were university students in 22-23 years age group, native Hindi speakers with bilingual fluency in English. The annotators were asked to identify the location of the pun in each of the advertisements and write down the target of the pun. Any disagreements between annotators were resolved by mutual discussion. In a few cases where puns were identified to have multiple targets, we kept all such possibilities in our dataset. A few puns were identified to be non-recoverable because of the lack of contextual knowledge, while a few puns had multiple pun locations. We removed both these types from our dataset, which left us with 110 puns. Finally, we divided these 110 annotated puns into the two categories as defined in Section 3.1 thereby getting 51 advertisements categorized as intra-sentential code-mixed puns, and the rest as intra-word code-mixed puns. We use the former as our test data. Model For preprocessing the text we give as input to our system, we first tokenize the advertisement text using NLTK's BIBREF13 tokenizer and remove all punctuations. We then give the resultant tokens as input to our model, which is a 4 step process as described below: At this step, we aim to identify the language of each of the tokens in the input text by classifying them into one of the 5 categories: English, Hindi, Named Entity (NE), Out of Vocabulary (OOV), or Ambiguous (words that could belong to both English and Hindi). We use a dictionary-based lookup method to classify a word in English or Hindi. Since the input is in Roman script, to recognize Hindi words, we use a list of 30k transliterated Hindi words in Roman to their Devanagari counterparts BIBREF14 . For the English language, we collected news data from the archives of a leading Indian Newspaper, The Hindu. Data from 2012-2018 under the tags National, International, Sports, Cinema, Television was collected, amounting to 12,600 articles with 200k sentences and around 38k unique words. We use this data to build an English dictionary. Also, we used NLTK's BIBREF13 Named Entity Recognition module on the same data to get a dictionary of Named Entities. We first try to classify all tokens as English, Hindi and NE using these dictionaries. Then, words which are found in both English and Hindi are marked as Ambiguous. The words which do not fall into any of these are classified as OOV. 
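The named-entity dictionary mentioned above can be collected with NLTK's chunker roughly as sketched below; this assumes the standard NLTK resources (tokeniser, POS tagger, NE chunker) have been downloaded, and the input sentences stand in for the news corpus described above:

```python
import nltk  # requires punkt, averaged_perceptron_tagger, maxent_ne_chunker, words

def build_ne_dictionary(sentences):
    """Collect surface forms of named entities from raw sentences with NLTK."""
    entities = set()
    for sentence in sentences:
        tokens = nltk.word_tokenize(sentence)
        tagged = nltk.pos_tag(tokens)
        for chunk in nltk.ne_chunk(tagged):
            if hasattr(chunk, "label"):               # subtrees are named-entity chunks
                entities.add(" ".join(tok for tok, _ in chunk.leaves()))
    return entities

# e.g. build_ne_dictionary(["Amul released a new advertisement in Delhi."])
```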
We now identify all possible punning locations in the text. For this, we consider words on the boundaries of language change as candidates for pun locations. Then, all NEs and OOV words are added to the list of pun candidates as well. Third, if any Ambiguous words exist in the text, we consider it once as English and once as Hindi for the next steps. In this step, we contextually lookup all the candidate locations using left context and right context to get a list of all words that may occur at that position. We use bi-gram language models we built using Knesser-Ney smoothing BIBREF15 . We used the data mentioned in the previous step to build the language model for English, and 100k sentences from Hindi monolingual data from BIBREF16 to build the language models for English and Hindi respectively. As it is highly likely that the left and the right context at a pun location belong to different languages, we look at each of those separately instead of taking an intersection of the left and the right context. Lastly, at each pun location, we calculate the similarity of the word at that location with all the words that can occur at that location depending on the context and pick the most similar words as the possible targets. To compare words belonging to two different languages on a phonetic basis, we convert both of them to WX notation BIBREF17 , which denotes a standard way to represent Indian languages in the Roman script. We transliterate our identified Hindi words from Devanagari to WX notation. To convert English words to the same notation, we use the CMU phonetic dictionary , which uses a 39 phoneme set to represent North American pronunciations of English words. We build a mapping between this phoneme set and WX notation. Whenever there was no exact parallel between CMU pronouncing dictionary's notation and WX, we used the word's Indian English pronunciation to find the closest match. Once we converted all to WX notation, we use a modified version of Levenshtein Distance BIBREF18 to find most similar words. In this normalized version of Levenshtein distance, we account for a few features like aspirations (for example, /p/,/ph/) which are non-phonemic in English, vowel elongations, rhyme, same beginning or ending sounds. In case of an OOV word, since it cannot be converted to WX notation due to non-availability of any phonetic transcription, we simply find the words with the least orthographic distance when written in Roman script, using a similar measure as used for phonetic distance with a few more normalizations (for example, considering 'w' and 'v' as similar). Results and discussion We test the model explained in the previous section on our test dataset described in Section 3.2 and note that this method is correctly able to recover targets for 34 out of these 51 puns, or around 67% of the puns, which are very encouraging results for this complex task. Examples where the system performed successfully are given in Table 3 . We do a thorough error analysis below for the cases our method fails for. Conclusion and Future work To conclude, in this paper, we present a first-ever work on target recovery code-mixed puns. We study various puns where the word-play is a result of code-switching, and classify them into 2 categories - puns with intra-sentential code mixing and those with intra-word code mixing. We then propose a methodology to recover the targets for puns belonging to the former category, using only monolingual language data. 
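The context lookup step relies on bigram language models with Kneser-Ney smoothing; a minimal NLTK sketch of training such a model and ranking candidate words for a left context is shown below (the toy sentences and helper names are placeholders, not the actual news or Hindi corpora):

```python
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

def train_bigram_lm(tokenised_sentences):
    """Fit a Kneser-Ney smoothed bigram LM on tokenised sentences."""
    train_ngrams, vocab = padded_everygram_pipeline(2, tokenised_sentences)
    lm = KneserNeyInterpolated(order=2)
    lm.fit(train_ngrams, vocab)
    return lm

def candidates_for_context(lm, left_word, top_k=20):
    """Rank vocabulary words by their probability of following left_word."""
    scored = [(w, lm.score(w, [left_word])) for w in lm.vocab if w != "<UNK>"]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

lm = train_bigram_lm([["rivers", "can't", "hear"], ["rivers", "flow", "fast"]])
print(candidates_for_context(lm, "rivers", top_k=3))
```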
We test our proposed approach on a small manually annotated dataset, and we see that our system was able to successfully recover 67% of the puns from the set. In the future, we want to perform a more comprehensive evaluation of this approach on a larger, more diverse set of puns. We want to improve and extend our approach to be able to recover intra-word code-mixed puns along with the intra-sentential ones that it handles right now. After that, the system should be extended to be able to recover all kinds of puns in code-mixed language, regardless of whether the pun itself is monolingual or code-mixed. Acknowledgements We thank the anonymous reviewers for their comments that helped improve this paper.
intra-sequential and intra-word
bc26eee4ef1c8eff2ab8114a319901695d044edb
bc26eee4ef1c8eff2ab8114a319901695d044edb_0
Q: How is dialogue guided to avoid interactions that breach procedures and processes only known to experts? Text: Introduction Recent machine learning breakthroughs in dialogue systems and their respective components have been made possible by training on publicly available large scale datasets, such as ConvAI BIBREF0, bAbI BIBREF1 and MultiWoZ BIBREF2, many of which are collected on crowdsourcing services, such as Amazon Mechanical Turk and Figure-eight. These data collection methods have the benefits of being cost-effective, time-efficient to collect and scalable, enabling the collection of large numbers of dialogues. Where this crowdsourcing method has its limitations is when specific domain expert knowledge is required, rather than general conversation. These tasks include, for example, call centre agents BIBREF3 or clerks with access to a database, as is required for tourism information and booking BIBREF2. In the near future, there will be a demand to extend this to workplace-specific tasks and procedures. Therefore, a method of gathering crowdsourced dialogue data is needed that ensures compliance with such procedures, whilst providing coverage of a wide variety of dialogue phenomena that could be observed in deployment of a trained dialogue system. Wizard-of-Oz data collections in the past have provided such a mechanism. However, these have traditionally not been scalable because of the scarcity of Wizard experts or the expense to train up workers. This was the situation with an initial study reported in BIBREF4, which was conducted in a traditional lab setting and where the Wizard (an academic researcher) had to learn, through training and reading manuals, how best to perform operations in our domain of emergency response. We present the CRWIZ Intelligent Wizard Interface that enables a crowdsourced Wizard to make intelligent, relevant choices without such intensive training by providing a restricted list of valid and relevant dialogue task actions, which changes dynamically based on the context, as the interaction evolves. Prior crowdsourced wizarded data collections have divided the dialogue up into turns and each worker's job consists of one turn utterance generation given a static dialogue context, as in the MultiWoZ dataset BIBREF2. However, this can limit naturalness of the dialogues by restricting forward planning, collaboration and use of memory that humans use for complex multi-stage tasks in a shared dynamic environment/context. Our scenario is such a complex task. Specifically, our scenario relates to using robotics and autonomous systems on an offshore energy platform to resolve an emergency and is part of the EPSRC ORCA Hub project BIBREF5. The ORCA Hub vision is to use teams of robots and autonomous intelligent systems to work on offshore energy platforms to enable cheaper, safer and more efficient working practices. An important part of this is ensuring safety of robots in complex, dynamic and cluttered environments, co-operating with remote operators. With this data collection method reported here, we aim to automate a conversational Intelligent Assistant (Fred), who acts as an intermediary between the operator and the multiple robotic systems BIBREF6, BIBREF7. Emergency response is clearly a high-stakes situation, which is difficult to emulate in a lab or crowdsourced data collection environment. Therefore, in order to foster engagement and collaboration, the scenario was gamified with a monetary reward given for task success. 
In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction. We then perform a data collection and compare our dataset to a similar dataset collected in a more controlled lab setting with a single Wizard BIBREF4 and discuss the advantages/disadvantages of both approaches. Finally, we present future work. Our contributions are as follows: The release of a platform for the CRWIZ Intelligent Wizard Interface to allow for the collection of dialogue data for longer complex tasks, by providing a dynamic selection of relevant dialogue acts. A survey of existing datasets and data collection platforms, with a comparison to the CRWIZ data collection for Wizarded crowdsourced data in task-based interactions. Related Work Table TABREF3 gives an overview of prior work and datasets. We report various factors to compare to the CRWIZ dataset corresponding to columns in Table TABREF3: whether or not the person was aware they were talking to a bot; whether each dialogue had a single or multiple participants per role; whether the data collection was crowdsourced; and the modality of the interaction and the domain. As we see from the bottom row, none of the datasets reported in the table meet all the criteria we are aiming for, exemplifying the need for a new and novel approach. Collecting large amounts of dialogue data can be very challenging as two interlocutors are required to create a conversation. If one of the partners in the conversation is a machine as in BIBREF0, the challenge becomes slightly easier since only one partner is lacking. However, in most cases these datasets are aimed at creating resources to train the conversational system itself. Self-authoring the dialogues BIBREF16 or artificially creating data BIBREF1 could be a solution to rapidly collect data, but this solution has been shown to produce low quality unnatural data BIBREF17. One way to mitigate the necessity of pairing two users simultaneously is to allow several participants to contribute to the dialogue, one turn at the time. This approach has been used both in task-oriented BIBREF10, BIBREF2, BIBREF9 and chitchat BIBREF17. This means that the same dialogue can be authored by several participants. However, this raises issues in terms of coherence and forward-planning. These can be addressed by carefully designing the data collection to provide the maximum amount of information to the participants (e.g. providing the task, personality traits of the bot, goals, etc.) but then this adds to cognitive load, time, cost and participant fatigue. Pairing is a valid option, which has been used in a number of recent data collections in various domains, such as navigating in a city BIBREF13, playing a negotiation game BIBREF14, talking about a person BIBREF18, playing an image game BIBREF8 or having a chat about a particular image that is shown to both participants BIBREF21, BIBREF22. Pairing frameworks exist such as Slurk BIBREF23. Besides its pairing management feature, Slurk is designed in order to allow researchers to modify it and implement their own data collection rapidly. The scenarios for the above-mentioned data collections are mostly intuitive tasks that humans do quite regularly, unlike our use-case scenario of emergency response. Role playing is one option. 
For example, recent work has tried to create datasets for non-collaborative scenarios BIBREF24, BIBREF25, requesting participants to take on a particular role during the data collection. This is particularly challenging when the recruitment is done via a crowdsourcing platform. In BIBREF25, the motivation for the workers to play the role is intrinsic to the scenario. In this data collection, one of the participants tries to persuade their partner to contribute to a charity with a certain amount of money. As a result of their dialogue, the money that the persuadee committed to donate was actually donated to a charity organisation. However, for scenarios such as ours, the role playing requires a certain expertise, and it is questionable whether the desired behaviour would be achieved simply by letting two non-experts converse with free text. Therefore, in recent data collections, there have been a number of attempts to control the data quality in order to produce a desired behaviour. For example, in BIBREF15, the data collection was done with a limited number of subjects who performed the task several days in a row, behaving both as the Wizard and the customer of a travel agency. The same idea was followed in BIBREF12, where a number of participants took part in the data collection over a period of 6 months, and in BIBREF3, BIBREF19, where a limited number of subjects were trained to be the Wizard. This quality control, however, naturally comes with the cost of recruiting and paying these subjects accordingly. The solution we propose in this paper tries to minimise these costs by increasing the pool of Wizards to anyone wanting to collaborate in the data collection, by providing them with the necessary guidance to generate the desired dialogue behaviour. This is a valuable solution for collecting dialogues in domains where specific expertise is required and the cost of training capable Wizards is high. We required fine-grained control over the Wizard interface so as to be able to generate more directed dialogues for specialised domains, such as emergency response for offshore facilities. By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This offers several advantages: A guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge. Providing several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios. System Overview The CRWIZ Intelligent Wizard Interface resides on Slurk BIBREF23, an interaction server built for conducting dialogue experiments and data collections. Slurk handles the pairing of participants and provides a basic chat layout amongst other features. Refer to BIBREF23 for more information on the pairing of participants and the original chat layout. Our chat layout remains similar to Slurk with an important difference. In our scenario, we assign each new participant a role (Operator or Wizard) and, depending on this role, the participant sees different game instructions and chat layout schemes. These are illustrated in Figures FIGREF8 and FIGREF11, for the Operator and Wizard respectively. The main components are described in turn below: 1) The Intelligent Wizard Interface; 2) dialogue structure; and 3) system-changing actions.
Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. Figure FIGREF11 shows this interface with several actions available to be used in our data collection. Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions. The CRWIZ framework is domain-agnostic, but the data collected with it corresponds to the emergency response domain. System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions: Verbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions. Non-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection. Submitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. This is reflected in the Wizard interface. The advantage of the CRWIZ framework is that it can easily be adapted to different domains and procedures by simply modifying the dialogue states loaded at initialisation. These files are in YAML format and have a simple structure that defines their NLG templates (the FSM will pick one template at random if there is more than one) and the states that it can transition to. Note, that some further modifications may be necessary if the scenario is a slot-filling dialogue requiring specific information at various stages. Once the dialogue between the participants finishes, they receive a code in the chat, which can then be submitted to the crowdsourcing platform for payment. The CRWIZ framework generates a JSON file in its log folder with all the information regarding the dialogue, including messages sent, FSM transitions, world state at each action, etc. Automatic evaluation metrics and annotations are also appended such as number of turns per participant, time taken or if one of the participants disconnected. Paying the crowdworkers can be done by just checking that there is a dialogue file with the token that they entered. Data Collection We set up a crowdsourced data collection through Amazon Mechanical Turk, in which two participants chatted with each other in a setting involving an emergency at an offshore facility. 
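As an illustration of the dialogue state files and FSM described above, the snippet below shows a hypothetical, much simplified YAML fragment and how a Wizard's options could be derived from it; state names, fields and utterances are invented and do not reproduce the released CRWIZ files:

```python
import random
import yaml  # assumption: PyYAML is available

STATES_YAML = """
# Hypothetical CRWIZ-style state file: names and fields are illustrative only.
locate_emergency:
  utterances:
    - "Which area should I inspect first?"
    - "I can send a robot to inspect the alarm location."
  transitions: [send_inspection_robot, request_clarification]
send_inspection_robot:
  utterances:
    - "Sending {robot} to {area} now."
  transitions: [report_findings]
"""

def wizard_options(state_name, states):
    """Return the dialogue templates and allowed transitions for an FSM state."""
    state = states[state_name]
    return state["utterances"], state["transitions"]

states = yaml.safe_load(STATES_YAML)
utterances, next_states = wizard_options("locate_emergency", states)
print(random.choice(utterances), next_states)
```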
As mentioned above, participants had different roles during the interaction: one of them was an Operator of the offshore facility whereas the other one acted as an Intelligent Emergency Assistant. Both of them had the same goal of resolving the emergency and avoiding evacuation at all costs, but they had different functions in the task: The Operator was responsible for the facility and had to give instructions to the Emergency Assistant to perform certain actions, such as deploying emergency robots. Participants in the role of Operator were able to chat freely with no restrictions and were additionally given a map of the facility and a list of available robots (see Figure FIGREF8). The Emergency Assistant had to help the Operator handle the emergency by providing guidance and executing actions. Participants in the role of Emergency Assistant had predefined messages depending on the task progress. They had to choose between one of the options available, depending on which made sense at the time, but they also had the option to write their own message if necessary. The Emergency Assistant role mimics that of the Wizard in a Wizard-of-Oz experiment (see Figure FIGREF11). The participants had a limited time of 6 minutes to resolve the emergency, which consisted of the following sub-tasks: 1) identify and locate the emergency; 2) resolve the emergency; and 3) assess the damage caused. They had four robots available to use with different capabilities: two ground robots with wheels (Husky) and two Quadcopter UAVs (Unmanned Aerial Vehicles). For images of these robots, see Figure FIGREF8. Some robots could inspect areas whereas others were capable of activating hoses, sprinklers or opening valves. Both participants, regardless of their role, had a list with the robots available and their capabilities, but only the Emergency Assistant could control them. This control was through high-level actions (e.g. moving a robot to an area, or ordering the robot to inspect it) that the Emergency Assistant had available as buttons in their interface, as shown in Figure FIGREF11. For safety reasons that might occur in the real world, only one robot could be active doing an action at any time. The combinations of robots and capabilities meant that there was not a robot that could do all three steps of the task mentioned earlier (inspect, resolve and assess damage), but the robots could be used in any order allowing for a variety of ways to resolve the emergency. Participants would progress through the task when certain events were triggered by the Emergency Assistant. For instance, inspecting the area affected by an alarm would trigger the detection of the emergency. After locating the emergency, other dialogue options and commands would open up for the Emergency Assistant. In order to give importance to the milestones in the dialogue, these events were also signalled by GIFs (short animated video snippets) in the chat that both participants could see (e.g. a robot finding a fire), as in Figure FIGREF12. The GIFs were added for several reasons: to increase participant engagement and situation awareness, to aid in the game and to show progress visually. Note that there was no visual stimuli in the original WoZ study BIBREF4 but they were deemed necessary here to help the remote participants contextualise the scenario. These GIFs were produced using a Digital Twin simulation of the offshore facility with the various types of robots. See BIBREF26 for details on the Digital Twin. 
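A toy sketch of the world-state bookkeeping implied by this setup, enforcing the one-active-robot rule and unlocking progress once the emergency is located, is given below; class, method and action names are hypothetical and the actual platform manages this inside the Slurk-based server:

```python
class WorldState:
    """Tracks robot availability and task milestones for one chat room."""

    def __init__(self, robots):
        self.robots = {name: "idle" for name in robots}
        self.emergency_located = False
        self.emergency_resolved = False

    def dispatch(self, robot, action, area):
        # Safety rule from the scenario: only one robot may be active at a time.
        if any(status == "busy" for status in self.robots.values()):
            return f"Cannot send {robot}: another robot is still active."
        self.robots[robot] = "busy"
        return f"{robot} is now performing '{action}' in {area}."

    def complete(self, robot, action):
        self.robots[robot] = "idle"
        if action == "inspect":
            self.emergency_located = True   # unlocks resolution dialogue options
        elif action == "activate_hose" and self.emergency_located:
            self.emergency_resolved = True

world = WorldState(["Husky 1", "Husky 2", "Quadcopter 1", "Quadcopter 2"])
print(world.dispatch("Husky 1", "inspect", "east tower"))
print(world.dispatch("Quadcopter 1", "inspect", "east tower"))  # refused: Husky 1 busy
```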
Data Collection ::: Implementation The dialogue structure for the Emergency Assistant (the Wizard) followed a dialogue flow previously used for the original lab-based Wizard-of-Oz study BIBREF4 but which was slightly modified and simplified for this crowdsourced data collection. In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely. The dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations. The Emergency Assistant dialogue options show various speaking styles, with a more assertive tone (“I am sending Husky 1 to east tower”) or others with more collaborative connotations (“Which robot do you want to send?” or “Husky 1 is available to send to east tower”). Refer to BIBREF4 for more details. Furthermore, neither participants were restricted in the number of messages that they could send and we did not require a balanced number of turns between them. However, there were several dialogue transitions that required an answer or authorisation from the Operator, so the FSM would lock the dialogue state until the condition was met. As mentioned earlier, the commands to control the robots are also transitions of the FSM, so they were not always available. The Emergency Assistant interface contains a button to get a hint if they get stuck at any point of the conversation. This hint mechanism, when activated, highlights one of the possible dialogue options or robot buttons. This highlighted transition was based on the observed probability distribution of transitions from BIBREF4 to encourage more collaborative interaction than a single straight answer. As in the real world, robot actions during the task were simulated to take a certain period of time, depending on the robot executing it and the action. The Emergency Assistant had the option to give status updates and progress reports during this period. Several dialogue options were available for the Emergency Assistant whilst waiting. The time that robots would take to perform actions was based on simulations run on a Digital Twin of the offshore facility implemented in Gazebo BIBREF26. Specifically, we pre-simulated typical robot actions, with the robot's progress and position reflected in the Wizard interface with up-to-date dialogue options for the Emergency Assistant. Once the robot signals the end of their action, additional updated dialogue options and actions are available for the Emergency Assistant. This simulation allowed us to collect dialogues with a realistic embedded world state. Data Collection ::: Deployment We used Amazon Mechanical Turk (AMT) for the data collection. We framed the task as a game to encourage engagement and interaction. The whole task, (a Human Intelligence Task (HIT) in AMT) consisted of the following: Reading an initial brief set of instructions for the overall task. Waiting for a partner for a few seconds before being able to start the dialogue. When a partner was found, they were shown the instructions for their assigned role. As these were different, we ensured that they both took around the same time. The instructions had both a text component and a video explaining how to play, select dialogues, robots, etc. Playing the game to resolve the emergency. This part was limited to 6 minutes. 
Filling in a post-task questionnaire about partner collaboration and task ease. The participants received a game token after finishing the game that allowed them to complete the questionnaire and submit the task. This token helped us link their dialogue to their responses to the questionnaire. Several initial pilots helped to define the total time required as 10 minutes for all the steps above. We set the HIT in AMT to last 20 minutes to allow additional time should any issues arise. The pilots also helped in setting the payment for the workers. Initially, participants were paid a flat amount of $1.4 per dialogue. However, we found that offering a tiered payment tied to the length of the dialogue, plus a bonus for completing the task, was the most successful and cost-effective method to foster engagement and conversation: $0.5 as a base for attempting the HIT, reading the instructions and completing the questionnaire. $0.15 per minute during the game, for a maximum of $0.9 for the 6 minutes. $0.2 additional bonus if the participants were able to successfully avoid the evacuation of the offshore facility. The pay per worker was therefore $1.4 for completing a whole dialogue and $1.6 for those who resolved the emergency, for a 10-minute HIT. This pay was above the Federal minimum wage in the US ($7.25/hr or $0.12/min) at the time of the experiment. The post-task questionnaire had four questions rated on 7-point rating scales that are loosely based on the PARADISE BIBREF27 questions for spoken dialogue systems: Partner collaboration: “How helpful was your partner?” on a scale of 1 (not helpful at all) to 7 (very helpful). Information ease: “In this conversation, was it easy to get the information that I needed?” on a scale of 1 (no, not at all) to 7 (yes, completely). Task ease: “How easy was the task?” on a scale of 1 (very easy) to 7 (very difficult). User expertise: “In this conversation, did you know what you could say or do at each point of the dialog?” on a scale of 1 (no, not at all) to 7 (yes, completely). At the end, there was also an optional entry to give free-text feedback about the task and/or their partner. Data Analysis For the initial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). All the dialogues were manually checked by one of the authors, and those where the workers were clearly not partaking in the task or collaborating were removed from the dataset. The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, so it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds, with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4. Data Analysis ::: Subjective Data Table TABREF33 gives the results from the post-task survey. We observe that subjective and objective task success align, in that the dialogues that resolved the emergency were rated consistently higher than the rest.
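Returning to the tiered payment scheme and the success figures described above, the snippet below simply recomputes them as a sanity check; the function and variable names are ours and purely illustrative, not part of the CRWIZ platform.

```python
def worker_pay(minutes_played, emergency_resolved):
    """Tiered AMT payment described above: base + per-minute pay + success bonus."""
    base = 0.50                                  # instructions + questionnaire
    time_pay = min(0.15 * minutes_played, 0.90)  # $0.15/min, capped at the 6-minute game
    bonus = 0.20 if emergency_resolved else 0.00
    return round(base + time_pay + bonus, 2)


assert worker_pay(6, emergency_resolved=False) == 1.40  # whole dialogue: $1.4
assert worker_pay(6, emergency_resolved=True) == 1.60   # emergency resolved: $1.6

# Reported task success: 14 of the 145 collected dialogues resolved the emergency.
print(f"{14 / 145:.2%} of dialogues resolved the emergency")  # 9.66%
```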
One-tailed Mann-Whitney U tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$; both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting. Regarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they were interacting with an automated agent, and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game”. Data Analysis ::: Single vs Multiple Wizards In Table TABREF28, we compare various metrics for the dialogues collected through crowdsourcing with those for the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of Emergency Assistant turns (and consequently the total number of turns). To further understand these differences, we first grouped the dialogue acts into four broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is evident that in the lab setting, where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in contexts where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. On the other hand, in the crowdsourced data collection, situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment. Perhaps not surprisingly, the data show a moderately strong positive correlation between task success and the number of Action-type dialogue acts the Wizard performs, which trigger events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts asking for confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare, but they perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected in which the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts. The task success rate was also very different between the two set-ups. In the experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire, whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots moved at realistic speeds and were therefore slower than in the lab setting. A higher bonus and more time for the task might lead to a higher task success rate. Data Analysis ::: Limitations It is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together.
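The kinds of statistical comparisons reported above (one-tailed Mann-Whitney U tests on the questionnaire ratings, and correlations between dialogue-act counts and task success) can be run with standard SciPy calls. The snippet below is a generic illustration on made-up numbers, not the authors' analysis script, and it does not reproduce the reported values.

```python
# Illustration of the form of the analysis above: a one-tailed Mann-Whitney U
# test on questionnaire ratings and a correlation between dialogue-act counts
# and task success. The numbers are dummy data, not the collected ratings.
from scipy.stats import mannwhitneyu, pearsonr

resolved_q1 = [7, 6, 7, 5, 6, 7, 6]      # Q1 ratings, emergency resolved
not_resolved_q1 = [4, 5, 3, 6, 4, 5, 4]  # Q1 ratings, emergency not resolved

u, p = mannwhitneyu(resolved_q1, not_resolved_q1, alternative="greater")
print(f"Mann-Whitney U = {u}, one-tailed p = {p:.4f}")

action_acts = [2, 5, 1, 4, 6, 3, 5]      # Action-type dialogue acts per dialogue
task_success = [0, 1, 0, 1, 1, 0, 1]     # 1 = emergency resolved
r, _ = pearsonr(action_acts, task_success)
print(f"Correlation with task success: R = {r:.3f}")
```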
As mentioned above, there were some issues with participants not collaborating, and these dialogues had to be discarded as they were not usable. Data Analysis ::: Future Work In future work, we want to expand and improve the platform. Dialogue system development can greatly benefit from better ways of obtaining data for rich task-oriented domains such as ours. Part of fully exploiting the potential of crowdsourcing services lies in having readily available tools that help in the generation and gathering of data. One such tool would be a method to take a set of rules, procedures or business processes and automatically convert them to an FSM, in a similar way to BIBREF28, ready to be uploaded to the Wizard interface. Regarding quality and coherence, dialogues are particularly challenging to rate automatically. In our data collection, there was not a correct or wrong dialogue option for the messages that the Emergency Assistant sent during the conversation, but some were better than others depending on the context with the Operator. This context is not easily measurable for complex tasks that depend on a dynamic world state. Therefore, we leave automatically measuring dialogue quality through the use of context to future work. The introduction of Instructional Manipulation Checks BIBREF29 before the game to filter out inattentive participants could improve the quality of the data (crowdworkers are known for performing multiple tasks at once). Goodman2013 also recommend including screening questions that check both attention and language comprehension for AMT participants. Here, there is a balance to be investigated between the experience and quality of crowdworkers and the need for large numbers of participants so that they can be paired quickly. We are currently exploring using the data collected to train dialogue models for the emergency response domain using Hybrid Code Networks BIBREF30. Conclusion In conclusion, this paper described a new, freely available tool to collect crowdsourced dialogues in rich task-oriented settings. By exploiting the advantages of both the Wizard-of-Oz technique and crowdsourcing services, we can effortlessly obtain dialogues for complex scenarios. The predefined dialogue options available to the Wizard intuitively guide the conversation and allow the domain to be deeply explored without the need for expert training. These predefined options also reinforce the feeling of a true Wizard-of-Oz experiment, where the participant who is not the Wizard thinks that they are interacting with a non-human agent. As the applications for task-based dialogue systems keep growing, we will see the need for systematic ways of generating dialogue corpora in varied, richer scenarios. This platform aims to be the first step towards the simplification of crowdsourcing data collections for task-oriented collaborative dialogues where the participants are working towards a shared common goal. The code for the platform and the data are also released with this publication. Acknowledgements This work was supported by the EPSRC funded ORCA Hub (EP/R026173/1, 2017-2021). Chiyah Garcia's PhD is funded under the EPSRC iCase EP/T517471/1 with Siemens.
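One of the future-work ideas above is a tool that converts a written procedure into the FSM format loaded by the Wizard interface. The sketch below only illustrates the direction such a tool could take: it turns an ordered list of steps into a linear state graph and dumps it as YAML (requires PyYAML). The keys used here are assumptions loosely modelled on the description of the state files earlier in the paper, not the platform's actual schema.

```python
# Rough sketch: turn an ordered list of procedure steps into a linear FSM
# description and dump it as YAML (requires PyYAML). The keys below are
# assumptions, not the platform's actual schema.
import yaml


def procedure_to_fsm(steps):
    states = {}
    for i, step in enumerate(steps):
        name = f"step_{i}"
        next_state = f"step_{i + 1}" if i + 1 < len(steps) else "end"
        states[name] = {"utterances": [step], "transitions": [next_state]}
    states["end"] = {"utterances": ["Procedure complete."], "transitions": []}
    return yaml.dump(states, sort_keys=False)


print(procedure_to_fsm([
    "Sound the alarm and inform the Operator.",
    "Send a robot to inspect the affected area.",
    "Activate the sprinklers in that area.",
]))
```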
pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction
9c94ff8c99d3e51c256f2db78c34b2361f26b9c2
9c94ff8c99d3e51c256f2db78c34b2361f26b9c2_0
Q: What is meant by semiguided dialogue, what part of dialogue is guided?
The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard.
8e9de181fa7d96df9686d0eb2a5c43841e6400fa
8e9de181fa7d96df9686d0eb2a5c43841e6400fa_0
Q: Is CRWIZ already used for data collection, what are the results? Text: Introduction Recent machine learning breakthroughs in dialogue systems and their respective components have been made possible by training on publicly available large scale datasets, such as ConvAI BIBREF0, bAbI BIBREF1 and MultiWoZ BIBREF2, many of which are collected on crowdsourcing services, such as Amazon Mechanical Turk and Figure-eight. These data collection methods have the benefits of being cost-effective, time-efficient to collect and scalable, enabling the collection of large numbers of dialogues. Where this crowdsourcing method has its limitations is when specific domain expert knowledge is required, rather than general conversation. These tasks include, for example, call centre agents BIBREF3 or clerks with access to a database, as is required for tourism information and booking BIBREF2. In the near future, there will be a demand to extend this to workplace-specific tasks and procedures. Therefore, a method of gathering crowdsourced dialogue data is needed that ensures compliance with such procedures, whilst providing coverage of a wide variety of dialogue phenomena that could be observed in deployment of a trained dialogue system. Wizard-of-Oz data collections in the past have provided such a mechanism. However, these have traditionally not been scalable because of the scarcity of Wizard experts or the expense to train up workers. This was the situation with an initial study reported in BIBREF4, which was conducted in a traditional lab setting and where the Wizard (an academic researcher) had to learn, through training and reading manuals, how best to perform operations in our domain of emergency response. We present the CRWIZ Intelligent Wizard Interface that enables a crowdsourced Wizard to make intelligent, relevant choices without such intensive training by providing a restricted list of valid and relevant dialogue task actions, which changes dynamically based on the context, as the interaction evolves. Prior crowdsourced wizarded data collections have divided the dialogue up into turns and each worker's job consists of one turn utterance generation given a static dialogue context, as in the MultiWoZ dataset BIBREF2. However, this can limit naturalness of the dialogues by restricting forward planning, collaboration and use of memory that humans use for complex multi-stage tasks in a shared dynamic environment/context. Our scenario is such a complex task. Specifically, our scenario relates to using robotics and autonomous systems on an offshore energy platform to resolve an emergency and is part of the EPSRC ORCA Hub project BIBREF5. The ORCA Hub vision is to use teams of robots and autonomous intelligent systems to work on offshore energy platforms to enable cheaper, safer and more efficient working practices. An important part of this is ensuring safety of robots in complex, dynamic and cluttered environments, co-operating with remote operators. With this data collection method reported here, we aim to automate a conversational Intelligent Assistant (Fred), who acts as an intermediary between the operator and the multiple robotic systems BIBREF6, BIBREF7. Emergency response is clearly a high-stakes situation, which is difficult to emulate in a lab or crowdsourced data collection environment. Therefore, in order to foster engagement and collaboration, the scenario was gamified with a monetary reward given for task success. 
In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction. We then perform a data collection and compare our dataset to a similar dataset collected in a more controlled lab setting with a single Wizard BIBREF4 and discuss the advantages/disadvantages of both approaches. Finally, we present future work. Our contributions are as follows: The release of a platform for the CRWIZ Intelligent Wizard Interface to allow for the collection of dialogue data for longer complex tasks, by providing a dynamic selection of relevant dialogue acts. A survey of existing datasets and data collection platforms, with a comparison to the CRWIZ data collection for Wizarded crowdsourced data in task-based interactions. Related Work Table TABREF3 gives an overview of prior work and datasets. We report various factors to compare to the CRWIZ dataset corresponding to columns in Table TABREF3: whether or not the person was aware they were talking to a bot; whether each dialogue had a single or multiple participants per role; whether the data collection was crowdsourced; and the modality of the interaction and the domain. As we see from the bottom row, none of the datasets reported in the table meet all the criteria we are aiming for, exemplifying the need for a new and novel approach. Collecting large amounts of dialogue data can be very challenging as two interlocutors are required to create a conversation. If one of the partners in the conversation is a machine as in BIBREF0, the challenge becomes slightly easier since only one partner is lacking. However, in most cases these datasets are aimed at creating resources to train the conversational system itself. Self-authoring the dialogues BIBREF16 or artificially creating data BIBREF1 could be a solution to rapidly collect data, but this solution has been shown to produce low quality unnatural data BIBREF17. One way to mitigate the necessity of pairing two users simultaneously is to allow several participants to contribute to the dialogue, one turn at the time. This approach has been used both in task-oriented BIBREF10, BIBREF2, BIBREF9 and chitchat BIBREF17. This means that the same dialogue can be authored by several participants. However, this raises issues in terms of coherence and forward-planning. These can be addressed by carefully designing the data collection to provide the maximum amount of information to the participants (e.g. providing the task, personality traits of the bot, goals, etc.) but then this adds to cognitive load, time, cost and participant fatigue. Pairing is a valid option, which has been used in a number of recent data collections in various domains, such as navigating in a city BIBREF13, playing a negotiation game BIBREF14, talking about a person BIBREF18, playing an image game BIBREF8 or having a chat about a particular image that is shown to both participants BIBREF21, BIBREF22. Pairing frameworks exist such as Slurk BIBREF23. Besides its pairing management feature, Slurk is designed in order to allow researchers to modify it and implement their own data collection rapidly. The scenarios for the above-mentioned data collections are mostly intuitive tasks that humans do quite regularly, unlike our use-case scenario of emergency response. Role playing is one option. 
For example, recent work has tried to create datasets for non-collaborative scenarios BIBREF24, BIBREF25, requesting participants to incarnate a particular role during the data collection. This is particularly challenging when the recruitment is done via a crowdsourcing platform. In BIBREF25, the motivation for the workers to play the role is intrinsic to the scenario. In this data collection, one of the participants tries to persuade their partner to contribute to a charity with a certain amount of money. As a result of their dialogue, the money that the persuadee committed to donate was actually donated to a charity organising. However, for scenarios such as ours, the role playing requires a certain expertise and it is questionable whether the desired behaviour would be achieved simply by letting two non-experts converse with free text. Therefore, in recent data collections, there have been a number of attempts to control the data quality in order to produce a desired behaviour. For example, in BIBREF15, the data collection was done with a limited number of subjects who performed the task several days in a row, behaving both as the Wizard and the customer of a travel agency. The same idea was followed in BIBREF12, where a number of participants took part in the data collection over a period of 6 months and, in BIBREF3, BIBREF19 where a limited number of subjects were trained to be the Wizard. This quality control, however, naturally comes with the cost of recruiting and paying these subjects accordingly. The solution we propose in this paper tries to minimise these costs by increasing the pool of Wizards to anyone wanting to collaborate in the data collection, by providing them the necessary guidance to generate the desired dialogue behaviour. This is a valuable solution for collecting dialogues in domains where specific expertise is required and the cost of training capable Wizards is high. We required fine-grained control over the Wizard interface so as to be able to generate more directed dialogues for specialised domains, such as emergency response for offshore facilities. By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This proposes several advantages: A guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge. Providing several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios. System Overview The CRWIZ Intelligent Wizard Interface resides on Slurk BIBREF23, an interaction server built for conducting dialogue experiments and data collections. Slurk handles the pairing of participants and provides a basic chat layout amongst other features. Refer to BIBREF23 for more information on the pairing of participants and the original chat layout. Our chat layout remains similar to Slurk with an important difference. In our scenario, we assign each new participant a role (Operator or Wizard) and, depending on this role, the participant sees different game instructions and chat layout schemes. These are illustrated in Figures FIGREF8 and FIGREF11, for the Operator and Wizard respectively. The main components are described in turn below: 1) The Intelligent Wizard Interface; 2) dialogue structure; and 3) system-changing actions. 
Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. Figure FIGREF11 shows this interface with several actions available to be used in our data collection. Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions. The CRWIZ framework is domain-agnostic, but the data collected with it corresponds to the emergency response domain. System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions: Verbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions. Non-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection. Submitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. This is reflected in the Wizard interface. The advantage of the CRWIZ framework is that it can easily be adapted to different domains and procedures by simply modifying the dialogue states loaded at initialisation. These files are in YAML format and have a simple structure that defines their NLG templates (the FSM will pick one template at random if there is more than one) and the states that it can transition to. Note, that some further modifications may be necessary if the scenario is a slot-filling dialogue requiring specific information at various stages. Once the dialogue between the participants finishes, they receive a code in the chat, which can then be submitted to the crowdsourcing platform for payment. The CRWIZ framework generates a JSON file in its log folder with all the information regarding the dialogue, including messages sent, FSM transitions, world state at each action, etc. Automatic evaluation metrics and annotations are also appended such as number of turns per participant, time taken or if one of the participants disconnected. Paying the crowdworkers can be done by just checking that there is a dialogue file with the token that they entered. Data Collection We set up a crowdsourced data collection through Amazon Mechanical Turk, in which two participants chatted with each other in a setting involving an emergency at an offshore facility. 
As mentioned above, participants had different roles during the interaction: one of them was an Operator of the offshore facility whereas the other one acted as an Intelligent Emergency Assistant. Both of them had the same goal of resolving the emergency and avoiding evacuation at all costs, but they had different functions in the task: The Operator was responsible for the facility and had to give instructions to the Emergency Assistant to perform certain actions, such as deploying emergency robots. Participants in the role of Operator were able to chat freely with no restrictions and were additionally given a map of the facility and a list of available robots (see Figure FIGREF8). The Emergency Assistant had to help the Operator handle the emergency by providing guidance and executing actions. Participants in the role of Emergency Assistant had predefined messages depending on the task progress. They had to choose between one of the options available, depending on which made sense at the time, but they also had the option to write their own message if necessary. The Emergency Assistant role mimics that of the Wizard in a Wizard-of-Oz experiment (see Figure FIGREF11). The participants had a limited time of 6 minutes to resolve the emergency, which consisted of the following sub-tasks: 1) identify and locate the emergency; 2) resolve the emergency; and 3) assess the damage caused. They had four robots available to use with different capabilities: two ground robots with wheels (Husky) and two Quadcopter UAVs (Unmanned Aerial Vehicles). For images of these robots, see Figure FIGREF8. Some robots could inspect areas whereas others were capable of activating hoses, sprinklers or opening valves. Both participants, regardless of their role, had a list with the robots available and their capabilities, but only the Emergency Assistant could control them. This control was through high-level actions (e.g. moving a robot to an area, or ordering the robot to inspect it) that the Emergency Assistant had available as buttons in their interface, as shown in Figure FIGREF11. For safety reasons that might occur in the real world, only one robot could be active doing an action at any time. The combinations of robots and capabilities meant that there was not a robot that could do all three steps of the task mentioned earlier (inspect, resolve and assess damage), but the robots could be used in any order allowing for a variety of ways to resolve the emergency. Participants would progress through the task when certain events were triggered by the Emergency Assistant. For instance, inspecting the area affected by an alarm would trigger the detection of the emergency. After locating the emergency, other dialogue options and commands would open up for the Emergency Assistant. In order to give importance to the milestones in the dialogue, these events were also signalled by GIFs (short animated video snippets) in the chat that both participants could see (e.g. a robot finding a fire), as in Figure FIGREF12. The GIFs were added for several reasons: to increase participant engagement and situation awareness, to aid in the game and to show progress visually. Note that there was no visual stimuli in the original WoZ study BIBREF4 but they were deemed necessary here to help the remote participants contextualise the scenario. These GIFs were produced using a Digital Twin simulation of the offshore facility with the various types of robots. See BIBREF26 for details on the Digital Twin. 
Data Collection ::: Implementation The dialogue structure for the Emergency Assistant (the Wizard) followed a dialogue flow previously used for the original lab-based Wizard-of-Oz study BIBREF4 but which was slightly modified and simplified for this crowdsourced data collection. In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely. The dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations. The Emergency Assistant dialogue options show various speaking styles, with a more assertive tone (“I am sending Husky 1 to east tower”) or others with more collaborative connotations (“Which robot do you want to send?” or “Husky 1 is available to send to east tower”). Refer to BIBREF4 for more details. Furthermore, neither participants were restricted in the number of messages that they could send and we did not require a balanced number of turns between them. However, there were several dialogue transitions that required an answer or authorisation from the Operator, so the FSM would lock the dialogue state until the condition was met. As mentioned earlier, the commands to control the robots are also transitions of the FSM, so they were not always available. The Emergency Assistant interface contains a button to get a hint if they get stuck at any point of the conversation. This hint mechanism, when activated, highlights one of the possible dialogue options or robot buttons. This highlighted transition was based on the observed probability distribution of transitions from BIBREF4 to encourage more collaborative interaction than a single straight answer. As in the real world, robot actions during the task were simulated to take a certain period of time, depending on the robot executing it and the action. The Emergency Assistant had the option to give status updates and progress reports during this period. Several dialogue options were available for the Emergency Assistant whilst waiting. The time that robots would take to perform actions was based on simulations run on a Digital Twin of the offshore facility implemented in Gazebo BIBREF26. Specifically, we pre-simulated typical robot actions, with the robot's progress and position reflected in the Wizard interface with up-to-date dialogue options for the Emergency Assistant. Once the robot signals the end of their action, additional updated dialogue options and actions are available for the Emergency Assistant. This simulation allowed us to collect dialogues with a realistic embedded world state. Data Collection ::: Deployment We used Amazon Mechanical Turk (AMT) for the data collection. We framed the task as a game to encourage engagement and interaction. The whole task, (a Human Intelligence Task (HIT) in AMT) consisted of the following: Reading an initial brief set of instructions for the overall task. Waiting for a partner for a few seconds before being able to start the dialogue. When a partner was found, they were shown the instructions for their assigned role. As these were different, we ensured that they both took around the same time. The instructions had both a text component and a video explaining how to play, select dialogues, robots, etc. Playing the game to resolve the emergency. This part was limited to 6 minutes. 
Filling a post-task questionnaire about partner collaboration and task ease. The participants received a game token after finishing the game that would allow them to complete the questionnaire and submit the task. This token helped us link their dialogue to the responses from the questionnaire. Several initial pilots helped to define the total time required as 10 minutes for all the steps above. We set the HIT in AMT to last 20 minutes to allow additional time should any issues arise. The pilots also helped in setting the payment for the workers. Initially, participants were paid a flat amount of $1.4 per dialogue. However, we found that offering a tiered payment tied to the length of the dialogue and a bonus for completing the task was the most successful and cost-effective method to foster engagement and conversation: $0.5 as a base for attempting the HIT, reading the instructions and completing the questionnaire. $0.15 per minute during the game, for a maximum of $0.9 for the 6 minutes. $0.2 additional bonus if the participants were able to successfully avoid the evacuation of the offshore facility. The pay per worker was therefore $1.4 for completing a whole dialogue and $1.6 for those who resolved the emergency for a 10-minute HIT. This pay is above the Federal minimum wage in the US ($7.25/hr or $0.12/min) at the time of the experiment. The post-task questionnaire had four questions rated on 7-point rating scales that are loosely based on the PARADISE BIBREF27 questions for spoken dialogue systems: Partner collaboration: “How helpful was your partner?” on a scale of 1 (not helpful at all) to 7 (very helpful). Information ease: “In this conversation, was it easy to get the information that I needed?” on a scale of 1 (no, not at all) to 7 (yes, completely). Task ease: “How easy was the task?” on a scale of 1 (very easy) to 7 (very difficult). User expertise: “In this conversation, did you know what you could say or do at each point of the dialog?” on a scale of 1 (no, not at all) to 7 (yes, completely). At the end, there was also an optional entry to give free text feedback about the task and/or their partner. Data Analysis For the initial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). All the dialogues were manually checked by one of the authors and those where the workers were clearly not partaking in the task or collaborating were removed from the dataset. The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, so it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds, with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4. Data Analysis ::: Subjective Data Table TABREF33 gives the results from the post-task survey. We observe that subjective and objective task success are similar in that the dialogues that resolved the emergency were rated consistently higher than the rest.
Mann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting. Regarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game“. Data Analysis ::: Single vs Multiple Wizards In Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we have first grouped the dialogue acts in four different broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is visible that in the lab setting where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in context where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. On the other hand, in the crowdsourced data collection utterances, the situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment. Perhaps not surprisingly, the data shows a medium strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts. The task success rate was also very different between the two set-ups. In experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower moving at realistic speeds unlike the lab setting. A higher bonus and more time for the task might lead to a higher task success rate. Data Analysis ::: Limitations It is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. 
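As a side note on the analysis above, the significance tests and correlations reported in this section can be reproduced with standard tooling, as sketched below. The score arrays are illustrative stand-ins for the real per-dialogue data, and Pearson's coefficient is assumed for the reported correlations since the paper does not state which coefficient was used.

```python
import numpy as np
from scipy import stats

# Hypothetical Likert scores (1-7) for Q1, split by task outcome.
resolved_q1 = np.array([6, 7, 6, 5, 7, 6])
not_resolved_q1 = np.array([4, 5, 3, 6, 4, 5, 4])

# One-tailed Mann-Whitney U test, as used above for Q1 and Q2.
u_stat, p_value = stats.mannwhitneyu(resolved_q1, not_resolved_q1,
                                     alternative="greater")

# Correlation between task success (0/1) and per-dialogue counts of
# Action dialogue acts, analogous to the reported R = 0.475
# (Pearson assumed; illustrative numbers only).
success = np.array([1, 1, 0, 0, 1, 0, 0])
action_counts = np.array([9, 8, 3, 4, 7, 2, 5])
r, r_p = stats.pearsonr(success, action_counts)

print(f"U={u_stat:.1f}, p={p_value:.4f}, R={r:.3f}")
```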
As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use. Data Analysis ::: Future Work In future work, we want to expand and improve the platform. Dialogue system development can greatly benefit from better ways of obtaining data for rich task-oriented domains such as ours. Part of fully exploiting the potential of crowdsourcing services lies in having readily available tools that help in the generation and gathering of data. One such tool would be a method to take a set of rules, procedures or business processes and automatically convert to a FSM, in a similar way to BIBREF28, ready to be uploaded to the Wizard interface. Regarding quality and coherence, dialogues are particularly challenging to automatically rate. In our data collection, there was not a correct or wrong dialogue option for the messages that the Emergency Assistant sent during the conversation, but some were better than others depending on the context with the Operator. This context is not easily measurable for complex tasks that depend on a dynamic world state. Therefore, we leave to future work automatically measuring dialogue quality through the use of context. The introduction of Instructional Manipulation Checks BIBREF29 before the game to filter out inattentive participants could improve the quality of the data (Crowdworkers are known for performing multiple tasks at once). Goodman2013 also recommend including screening questions that check both attention and language comprehension for AMT participants. Here, there is a balance that needs to be investigated between experience and quality of crowdworkers and the need for large numbers of participants in order to be quickly paired. We are currently exploring using the data collected to train dialogue models for the emergency response domain using Hybrid Code Networks BIBREF30. Conclusion In conclusion, this paper described a new, freely available tool to collect crowdsourced dialogues in rich task-oriented settings. By exploiting the advantages of both the Wizard-of-Oz technique and crowdsourcing services, we can effortlessly obtain dialogues for complex scenarios. The predefined dialogue options available to the Wizard intuitively guide the conversation and allow the domain to be deeply explored without the need for expert training. These predefined options also reinforce the feeling of a true Wizard-of-Oz experiment, where the participant who is not the Wizard thinks that they are interacting with a non-human agent. As the applications for task-based dialogue systems keep growing, we will see the need for systematic ways of generating dialogue corpora in varied, richer scenarios. This platform aims to be the first step towards the simplification of crowdsourcing data collections for task-oriented collaborative dialogues where the participants are working towards a shared common goal. The code for the platform and the data are also released with this publication. Acknowledgements This work was supported by the EPSRC funded ORCA Hub (EP/R026173/1, 2017-2021). Chiyah Garcia's PhD is funded under the EPSRC iCase EP/T517471/1 with Siemens.
Yes, CRWIZ has been used for data collection and its initial use resulted in 145 dialogues. The average time taken for the task was close to the estimate of 10 minutes. Fourteen dialogues (9.66%) resolved the emergency in the scenario, and these dialogues were rated consistently higher on subjective and objective measures than those which did not resolve the emergency. Qualitative results showed that participants believed that they were interacting with an automated assistant.
ff1595a388769c6429423a75b6e1734ef88d3e46
ff1595a388769c6429423a75b6e1734ef88d3e46_0
Q: How does framework made sure that dialogue will not breach procedures? Text: Introduction Recent machine learning breakthroughs in dialogue systems and their respective components have been made possible by training on publicly available large scale datasets, such as ConvAI BIBREF0, bAbI BIBREF1 and MultiWoZ BIBREF2, many of which are collected on crowdsourcing services, such as Amazon Mechanical Turk and Figure-eight. These data collection methods have the benefits of being cost-effective, time-efficient to collect and scalable, enabling the collection of large numbers of dialogues. Where this crowdsourcing method has its limitations is when specific domain expert knowledge is required, rather than general conversation. These tasks include, for example, call centre agents BIBREF3 or clerks with access to a database, as is required for tourism information and booking BIBREF2. In the near future, there will be a demand to extend this to workplace-specific tasks and procedures. Therefore, a method of gathering crowdsourced dialogue data is needed that ensures compliance with such procedures, whilst providing coverage of a wide variety of dialogue phenomena that could be observed in deployment of a trained dialogue system. Wizard-of-Oz data collections in the past have provided such a mechanism. However, these have traditionally not been scalable because of the scarcity of Wizard experts or the expense to train up workers. This was the situation with an initial study reported in BIBREF4, which was conducted in a traditional lab setting and where the Wizard (an academic researcher) had to learn, through training and reading manuals, how best to perform operations in our domain of emergency response. We present the CRWIZ Intelligent Wizard Interface that enables a crowdsourced Wizard to make intelligent, relevant choices without such intensive training by providing a restricted list of valid and relevant dialogue task actions, which changes dynamically based on the context, as the interaction evolves. Prior crowdsourced wizarded data collections have divided the dialogue up into turns and each worker's job consists of one turn utterance generation given a static dialogue context, as in the MultiWoZ dataset BIBREF2. However, this can limit naturalness of the dialogues by restricting forward planning, collaboration and use of memory that humans use for complex multi-stage tasks in a shared dynamic environment/context. Our scenario is such a complex task. Specifically, our scenario relates to using robotics and autonomous systems on an offshore energy platform to resolve an emergency and is part of the EPSRC ORCA Hub project BIBREF5. The ORCA Hub vision is to use teams of robots and autonomous intelligent systems to work on offshore energy platforms to enable cheaper, safer and more efficient working practices. An important part of this is ensuring safety of robots in complex, dynamic and cluttered environments, co-operating with remote operators. With this data collection method reported here, we aim to automate a conversational Intelligent Assistant (Fred), who acts as an intermediary between the operator and the multiple robotic systems BIBREF6, BIBREF7. Emergency response is clearly a high-stakes situation, which is difficult to emulate in a lab or crowdsourced data collection environment. Therefore, in order to foster engagement and collaboration, the scenario was gamified with a monetary reward given for task success. 
In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction. We then perform a data collection and compare our dataset to a similar dataset collected in a more controlled lab setting with a single Wizard BIBREF4 and discuss the advantages/disadvantages of both approaches. Finally, we present future work. Our contributions are as follows: The release of a platform for the CRWIZ Intelligent Wizard Interface to allow for the collection of dialogue data for longer complex tasks, by providing a dynamic selection of relevant dialogue acts. A survey of existing datasets and data collection platforms, with a comparison to the CRWIZ data collection for Wizarded crowdsourced data in task-based interactions. Related Work Table TABREF3 gives an overview of prior work and datasets. We report various factors to compare to the CRWIZ dataset corresponding to columns in Table TABREF3: whether or not the person was aware they were talking to a bot; whether each dialogue had a single or multiple participants per role; whether the data collection was crowdsourced; and the modality of the interaction and the domain. As we see from the bottom row, none of the datasets reported in the table meet all the criteria we are aiming for, exemplifying the need for a new and novel approach. Collecting large amounts of dialogue data can be very challenging as two interlocutors are required to create a conversation. If one of the partners in the conversation is a machine as in BIBREF0, the challenge becomes slightly easier since only one partner is lacking. However, in most cases these datasets are aimed at creating resources to train the conversational system itself. Self-authoring the dialogues BIBREF16 or artificially creating data BIBREF1 could be a solution to rapidly collect data, but this solution has been shown to produce low quality unnatural data BIBREF17. One way to mitigate the necessity of pairing two users simultaneously is to allow several participants to contribute to the dialogue, one turn at the time. This approach has been used both in task-oriented BIBREF10, BIBREF2, BIBREF9 and chitchat BIBREF17. This means that the same dialogue can be authored by several participants. However, this raises issues in terms of coherence and forward-planning. These can be addressed by carefully designing the data collection to provide the maximum amount of information to the participants (e.g. providing the task, personality traits of the bot, goals, etc.) but then this adds to cognitive load, time, cost and participant fatigue. Pairing is a valid option, which has been used in a number of recent data collections in various domains, such as navigating in a city BIBREF13, playing a negotiation game BIBREF14, talking about a person BIBREF18, playing an image game BIBREF8 or having a chat about a particular image that is shown to both participants BIBREF21, BIBREF22. Pairing frameworks exist such as Slurk BIBREF23. Besides its pairing management feature, Slurk is designed in order to allow researchers to modify it and implement their own data collection rapidly. The scenarios for the above-mentioned data collections are mostly intuitive tasks that humans do quite regularly, unlike our use-case scenario of emergency response. Role playing is one option. 
For example, recent work has tried to create datasets for non-collaborative scenarios BIBREF24, BIBREF25, requesting participants to incarnate a particular role during the data collection. This is particularly challenging when the recruitment is done via a crowdsourcing platform. In BIBREF25, the motivation for the workers to play the role is intrinsic to the scenario. In this data collection, one of the participants tries to persuade their partner to contribute to a charity with a certain amount of money. As a result of their dialogue, the money that the persuadee committed to donate was actually donated to a charity organising. However, for scenarios such as ours, the role playing requires a certain expertise and it is questionable whether the desired behaviour would be achieved simply by letting two non-experts converse with free text. Therefore, in recent data collections, there have been a number of attempts to control the data quality in order to produce a desired behaviour. For example, in BIBREF15, the data collection was done with a limited number of subjects who performed the task several days in a row, behaving both as the Wizard and the customer of a travel agency. The same idea was followed in BIBREF12, where a number of participants took part in the data collection over a period of 6 months and, in BIBREF3, BIBREF19 where a limited number of subjects were trained to be the Wizard. This quality control, however, naturally comes with the cost of recruiting and paying these subjects accordingly. The solution we propose in this paper tries to minimise these costs by increasing the pool of Wizards to anyone wanting to collaborate in the data collection, by providing them the necessary guidance to generate the desired dialogue behaviour. This is a valuable solution for collecting dialogues in domains where specific expertise is required and the cost of training capable Wizards is high. We required fine-grained control over the Wizard interface so as to be able to generate more directed dialogues for specialised domains, such as emergency response for offshore facilities. By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This proposes several advantages: A guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge. Providing several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios. System Overview The CRWIZ Intelligent Wizard Interface resides on Slurk BIBREF23, an interaction server built for conducting dialogue experiments and data collections. Slurk handles the pairing of participants and provides a basic chat layout amongst other features. Refer to BIBREF23 for more information on the pairing of participants and the original chat layout. Our chat layout remains similar to Slurk with an important difference. In our scenario, we assign each new participant a role (Operator or Wizard) and, depending on this role, the participant sees different game instructions and chat layout schemes. These are illustrated in Figures FIGREF8 and FIGREF11, for the Operator and Wizard respectively. The main components are described in turn below: 1) The Intelligent Wizard Interface; 2) dialogue structure; and 3) system-changing actions. 
The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.
dd2046f5481f11b7639a230e8ca92904da75feed
dd2046f5481f11b7639a230e8ca92904da75feed_0
Q: How do they combine the models? Text: Introduction Following a turbulent election season, 2016's cyber world is awash with hate speech. Automatic detection of hate speech has become an urgent need since human supervision is unable to deal with large quantities of emerging texts. Context information, by our definition, is the text, symbols or any other kind of information related to the original text. While intuitively, context accompanying hate speech is useful for detecting hate speech, context information of hate speech has been overlooked in existing datasets and automatic detection models. Online hate speech tends to be subtle and creative, which makes context especially important for automatic hate speech detection. For instance, (1) barryswallows: Merkel would never say NO This comment is posted for the News titled by "German lawmakers approve 'no means no' rape law after Cologne assaults". With context, it becomes clear that this comment is a vicious insult towards female politician. However, almost all the publicly available hate speech annotated datasets do not contain context information. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . We have created a new dataset consisting of 1528 Fox News user comments, which were taken from 10 complete discussion threads for 10 widely read Fox News articles. It is different from previous datasets from the following two perspectives. First, it preserves rich context information for each comment, including its user screen name, all comments in the same thread and the news article the comment is written for. Second, there is no biased data selection and all comments in each news comment thread were annotated. In this paper, we explored two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information in automatic hate speech detection. First, logistic regression models have been used in several prior hate speech detection studies BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF0 , BIBREF2 , BIBREF9 and various features have been tried including character-level and word-level n-gram features, syntactic features, linguistic features, and comment embedding features. However, all the features were derived from the to-be-classified text itself. In contrast, we experiment with logistic regression models using features extracted from context text as well. Second, neural network models BIBREF10 , BIBREF11 , BIBREF12 have the potential to capture compositional meanings of text, but they have not been well explored for online hate speech detection until recently BIBREF13 . We experiment with neural net models containing separate learning components that model compositional meanings of context information. Furthermore, recognizing unique strengths of each type of models, we build ensemble models of the two types of models. Evaluation shows that context-aware logistic regression models and neural net models outperform their counterparts that are blind with context information. Especially, the final ensemble models outperform a strong baseline system by around 10% in F1-score. Related Works Recently, a few datasets with human labeled hate speech have been created, however, most of existing datasets do not contain context information. Due to the sparsity of hate speech in everyday posts, researchers tend to sample candidates from bootstrapping instead of random sampling, in order to increase the chance of seeing hate speech. 
Therefore, the collected data instances are likely to be from distinct contexts. For instance, in the Primary Data Set described in BIBREF14 and later used by BIBREF9 , 10% of the dataset is randomly selected while the remaining consists of comments tagged by users and editors. BIBREF15 built a balanced data set of 24.5k tweets by selecting from Twitter accounts that claimed to be racist or were deemed racist using their followed news sources. BIBREF5 collected hateful tweets related to the murder of Drummer Lee Rigby in 2013. BIBREF0 provided a corpus of 16k annotated tweets in which 3.3k are labeled as sexist and 1.9k are labeled as racist. They created this corpus by bootstrapping from certain key words ,specific hashtags and certain prolific users. BIBREF16 created a dataset of 9000 human labeled paragraphs that were collected using regular expression matching in order to find hate speech targeting Judaism and Israel. BIBREF7 extracted data instances from instagram that were associated with certain user accounts. BIBREF2 presented a very large corpus containing over 115k Wikipedia comments that include around 37k randomly sampled comments and the remaining 78k comments were selected from Wikipedia blocked comments. Most of existing hate speech detection models are feature-based and use features derived from the target text itself. BIBREF5 experimented with different classification methods including Bayesian Logistic Regression, Random Forest Decision Trees and SVMs, using features such as n-grams, reduced n-grams, dependency paths, and hateful terms. BIBREF0 proposed a logistic regression model using character n-gram features. BIBREF14 used the paragraph2vec for joint modeling of comments and words, then the generated embeddings were used as feature in a logistic regression model. BIBREF9 experimented with various syntactic, linguistic and distributional semantic features including word length, sentence length, part of speech tags, and embedding features, in order to improve performance of logistic regression classifiers. Recently, BIBREF17 surveyed current approaches for hate speech detection, which interestingly also called to attention on modeling context information for resolving difficult hate speech instances. Corpus Overview The Fox News User Comments corpus consists of 1528 annotated comments (435 labeled as hateful) that were posted by 678 different users in 10 complete news discussion threads in the Fox News website. The 10 threads were manually selected and represent popular discussion threads during August 2016. All of the comments included in these 10 threads were annotated. The number of comments in each of the 10 threads is roughly equal. Rich context information was kept for each comment, including its user screen name, the comments and their nested structure and the original news article. The data corpus along with annotation guidelines is posted on github. Annotation Guidelines Our annotation guidelines are similar to the guidelines used by BIBREF9 . We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation. The labeling of hateful speech in our corpus is binary. A comment will be labeled as hateful or non-hateful. Annotation Procedure We identified two native English speakers for annotating online user comments. The two annotators first discussed and practices before they started annotation. 
They achieved a surprisingly high Kappa score BIBREF18 of 0.98 on 648 comments from 4 threads. We think that thorough discussions in the training stage is the key for achieving this high inter-agreement. For those comments which annotators disagreed on, we label them as hateful as long as one annotator labeled them as hateful. Then one annotator continued to annotate the remaining 880 comments from the remaining six discussion threads. Characteristics in Fox News User Comments corpus Hateful comments in the Fox News User Comments Corpus is often subtle, creative and implicit. Therefore, context information is necessary in order to accurately identify such hate speech. The hatefulness of many comments depended on understanding their contexts. For instance, (3) mastersundholm: Just remember no trabjo no cervesa This comment is posted for the news "States moving to restore work requirements for food stamp recipients". This comment implies that Latino immigrants abuse the usage of food stamp policy, which is clearly a stereotyping. Many hateful comments use implicit and subtle language, which contain no clear hate indicating word or phrase. In order to recognize such hard cases, we hypothesize that neural net models are more suitable by capturing overall composite meanings of a comment. For instance, the following comment is a typical implicit stereotyping against women. (4) MarineAssassin: Hey Brianne - get in the kitchen and make me a samich. Chop Chop 11% of our annotated comments have more than 50 words each. In such long comments, the hateful indicators usually appear in a small region of a comment while the majority of the comment is neutral. For example, (5) TMmckay: I thought ...115 words... Too many blacks winning, must be racist and needs affirmative action to make whites equally win! Certain user screen names indicate hatefulness, which imply that comments posted by these users are likely to contain hate speech. In the following example, commie is a slur for communists. (6)nocommie11: Blah blah blah. Israel is the only civilized nation in the region to keep the unwashed masses at bay. Logistic Regression Models In logistic regression models, we extract four types of features, word-level and character-level n-gram features as well as two types of lexicon derived features. We extract these four types of features from the target comment first. Then we extract these features from two sources of context texts, specifically the title of the news article that the comment was posted for and the screen name of the user who posted the comment. For logistic regression model implementation, we use l2 loss. We adopt the balanced class weight as described in Scikit learn. Logistic regression model with character-level n-gram features is presented as a strong baseline for comparison since it was shown very effective. BIBREF0 , BIBREF9 For character level n-grams, we extract character level bigrams, tri-grams and four-grams. For word level n-grams, we extract unigrams and bigrams. Linguistic Inquiry and Word Count, also called LIWC, has been proven useful for text analysis and classification BIBREF19 . In the LIWC dictionary, each word is labeled with several semantic labels. In our experiment, we use the LIWC 2015 dictionary which contain 125 semantic categories. Each word is converted into a 125 dimension LIWC vector, one dimension per semantic category. The LIWC feature vector for a comment or its context is a 125 dimension vector as well, which is the sum of all its words' LIWC vectors. 
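As a minimal sketch of the n-gram baseline just described (character bigrams to four-grams plus word unigrams and bigrams, fed to an l2-regularised logistic regression with balanced class weights), a scikit-learn pipeline along the following lines could be used; the lexicon-derived and context features are omitted for brevity, and the toy data are illustrative only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

comments = ["example hateful comment", "example neutral comment"]  # toy data
labels = [1, 0]

features = FeatureUnion([
    # character bigrams, trigrams and four-grams
    ("char", CountVectorizer(analyzer="char", ngram_range=(2, 4))),
    # word unigrams and bigrams
    ("word", CountVectorizer(analyzer="word", ngram_range=(1, 2))),
])

clf = Pipeline([
    ("features", features),
    # l2-regularised logistic regression with balanced class weights
    ("lr", LogisticRegression(penalty="l2", class_weight="balanced")),
])

clf.fit(comments, labels)
print(clf.predict_proba(["another comment to score"]))
```

Lexicon-derived features would be appended as additional columns, obtained by summing the per-word lexicon vectors of the comment and of its context, as described in this section.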
NRC emotion lexicon contains a list of English words that were labeled with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and sentiment polarities (negative and positive) BIBREF20 . We use NRC emotion lexicon to capture emotion clues in text. Each word is converted into a 10 dimension emotion vector, corresponding to eight emotion types and two polarity labels. The emotion vector for a comment or its context is a 10 dimension vector as well, which is the sum of all its words' emotion vectors. As shown in table TABREF20 , given comment as the only input content, the combination of character n-grams, word n-grams, LIWC feature and NRC feature achieves the best performance. It shows that in addition to character level features, adding more features can improve hate speech detection performance. However, the improvement is limited. Compared with baseline model, the F1 score only improves 1.3%. In contrast, when context information was taken into account, the performance greatly improved. Specifically, after incorporating features extracted from the news title and username, the model performance was improved by around 4% in both F1 score and AUC score. This shows that using additional context based features in logistic regression models is useful for hate speech detection. Neural Network Models Our neural network model mainly consists of three parallel LSTM BIBREF21 layers. It has three different inputs, including the target comment, its news title and its username. Comment and news title are encoded into a sequence of word embeddings. We use pre-trained word embeddings in word2vec. Username is encoded into a sequence of characters. We use one-hot encoding of characters. Comment is sent into a bi-directional LSTM with attention mechanism. BIBREF22 . News title and username are sent into a bi-directional LSTM. Note that we did not apply attention mechanism to the neural network models for username and news title because both types of context are relatively short and attention mechanism tends to be useful when text input is long. The three LSTM output layers are concatenated, then connected to a sigmoid layer, which outputs predictions. The number of hidden units in each LSTM used in our model is set to be 100. The recurrent dropout rate of LSTMs is set to 0.2. In addition, we use binary cross entropy as the loss function and a batch size of 128. The neural network models are trained for 30 epochs. As shown in table TABREF21 , given comment as the only input content, the bi-directional LSTM model with attention mechanism achieves the best performance. Note that the attention mechanism significantly improves the hate speech detection performance of the bi-directional LSTM model. We hypothesize that this is because hate indicator phrases are often concentrated in a small region of a comment, which is especially the case for long comments. Ensemble Models To study the difference of logistic regression model and neural network model and potentially get performance improvement, we will build and evaluate ensemble models. As shown in table TABREF24 , both ensemble models significantly improved hate speech detection performance. Figure FIGREF28 shows the system prediction results of comments that were labeled as hateful in the dataset. It can be seen that the two models perform differently. We further examined predicted comments and find that both types of models have unique strengths in identifying certain types of hateful comments. 
The feature-based logistic regression models are capable of making good use of character-level n-gram features, which are powerful for identifying hateful comments that contain OOV words, capitalized words or misspelled words. We provide two examples of hateful comments that were only labeled by the logistic regression model:

(7) kmawhmf: FBLM.

Here FBLM means fuck Black Lives Matter. This hateful comment contains only character-level information, which is exactly what our logistic regression model can exploit.

(8) SFgunrmn: what a efen loon, but most femanazis are.

This comment deliberately misspells feminazi, a derogatory term for feminists, as femanazis. It shows that the logistic regression model is capable of dealing with misspellings.

The LSTM with the attention mechanism is suitable for identifying the specific small regions that indicate hatefulness in long comments. In addition, the neural net models are powerful in capturing implicit hateful language. The following are two hateful comment examples that were only identified by the neural net model:

(9) freedomscout: @LarJass Many religions are poisonous to logic and truth, that much is true...and human beings still remain fallen human beings even they are Redeemed by the Sacrifice of Jesus Christ. So there's that. But the fallacies of thinking cannot be limited or attributed to religion but to error inherent in human motivation, the motivation to utter self-centeredness as fallen sinful human beings. Nearly all of the world's many religions are expressions of that utter sinful nature...Christianity and Judaism being the sole exceptions.

This comment expresses stereotyping against religions other than Christianity and Judaism. The hatefulness is concentrated within the two bolded segments.

(10) mamahattheridge: blacks Love being victims.

In this comment, the four words themselves are not hateful at all, but when combined they are clearly hateful against black people.

Evaluation
We evaluate our models by 10-fold cross validation using our newly created Fox News User Comments Corpus. Both types of models use the exact same 10 folds of training data and test data. We report experimental results using multiple metrics, including accuracy, precision/recall/F1-score, and area under the curve (AUC).

Experimental Results
Table TABREF20 shows the performance of the logistic regression models. The first section of table TABREF20 shows the performance of logistic regression models using features extracted from the target comment only. The results show that the logistic regression model improves on every metric after adding both word-level n-gram features and lexicon-derived features; however, the improvements are moderate. The second section shows the performance of logistic regression models using the four types of features extracted from both the target comment and its contexts. The results show that the logistic regression model using features extracted from the comment and both types of context achieves the best performance, with improvements of 2.8% and 2.5% in AUC score and F1-score respectively.

Table TABREF21 shows the performance of the neural network models. The first section of table TABREF21 shows the performance of several neural network models that use the comment as the only input; the model names are self-explanatory. We can see that the attention mechanism coupled with the bi-directional LSTM greatly improves online hate speech detection, by 5.7% in AUC score.
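A minimal sketch of the evaluation protocol just described is shown below; `build_model` stands in for either model family wrapped in a fit/predict_proba interface, and the use of stratified folds with a fixed random seed (so both model types see identical folds) is our assumption.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def cross_validate(build_model, X, y, n_splits=10, seed=0):
    # X: feature matrix (dense or sparse); y: numpy array of 0/1 labels
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = {m: [] for m in ("acc", "p", "r", "f1", "auc")}
    for train_idx, test_idx in skf.split(X, y):
        model = build_model()
        model.fit(X[train_idx], y[train_idx])
        prob = model.predict_proba(X[test_idx])[:, 1]   # hateful-class probability
        pred = (prob >= 0.5).astype(int)
        scores["acc"].append(accuracy_score(y[test_idx], pred))
        scores["p"].append(precision_score(y[test_idx], pred))
        scores["r"].append(recall_score(y[test_idx], pred))
        scores["f1"].append(f1_score(y[test_idx], pred))
        scores["auc"].append(roc_auc_score(y[test_idx], prob))
    return {m: float(np.mean(v)) for m, v in scores.items()}
```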
The second section of table TABREF21 shows the performance of the best neural net model (bi-directional LSTM with attention) after adding the additional learning components that take context as input. The results show that adding the username and the news title can each improve model performance. Using the news title gives the best F1 score, while using both the news title and the username gives the best AUC score.

Table TABREF24 shows the performance of ensemble models that combine the prediction results of the best context-aware logistic regression model and the best context-aware neural network model. We used two strategies to combine the prediction results of the two models: the Max Score Ensemble model makes the final decision based on the maximum of the two scores assigned by the two separate models, whereas the Average Score Ensemble model uses the average of the two scores. Both ensemble models further improve hate speech detection performance compared with using either model alone and achieve the best classification performance. Compared with the logistic regression baseline, the Max Score Ensemble model improves recall by more than 20% with comparable precision and improves the F1 score by around 10%; in addition, the Average Score Ensemble model improves the AUC score by around 7%.

Conclusion
We demonstrated the importance of utilizing context information for online hate speech detection. We first presented a corpus of hateful speech consisting of full threads of online discussion posts. In addition, we presented two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information and improve hate speech detection performance. Furthermore, we showed that ensemble models leveraging the strengths of both types of models achieve the best performance for automatic online hate speech detection.
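For reference, a minimal sketch of the two score-combination strategies described above; `lr_scores` and `nn_scores` stand for the hateful-class probabilities produced by the two context-aware models, and the 0.5 decision threshold is an assumption.

```python
import numpy as np

def max_score_ensemble(lr_scores, nn_scores, threshold=0.5):
    # final decision based on the maximum of the two model scores
    combined = np.maximum(np.asarray(lr_scores), np.asarray(nn_scores))
    return (combined >= threshold).astype(int)

def average_score_ensemble(lr_scores, nn_scores, threshold=0.5):
    # final decision based on the average of the two model scores
    combined = (np.asarray(lr_scores) + np.asarray(nn_scores)) / 2.0
    return (combined >= threshold).astype(int)
```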
maximum of two scores assigned by the two separate models, average score
47e6c3e6fcc9be8ca2437f41a4fef58ef4c02579
47e6c3e6fcc9be8ca2437f41a4fef58ef4c02579_0
Q: What is their baseline?
Logistic regression model with character-level n-gram features
569ad21441e99ae782d325d5f5e1ac19e08d5e76
569ad21441e99ae782d325d5f5e1ac19e08d5e76_0
Q: What context do they use?
title of the news article, screen name of the user
90741b227b25c42e0b81a08c279b94598a25119d
90741b227b25c42e0b81a08c279b94598a25119d_0
Q: What is their definition of hate speech? Text: Introduction Following a turbulent election season, 2016's cyber world is awash with hate speech. Automatic detection of hate speech has become an urgent need since human supervision is unable to deal with large quantities of emerging texts. Context information, by our definition, is the text, symbols or any other kind of information related to the original text. While intuitively, context accompanying hate speech is useful for detecting hate speech, context information of hate speech has been overlooked in existing datasets and automatic detection models. Online hate speech tends to be subtle and creative, which makes context especially important for automatic hate speech detection. For instance, (1) barryswallows: Merkel would never say NO This comment is posted for the News titled by "German lawmakers approve 'no means no' rape law after Cologne assaults". With context, it becomes clear that this comment is a vicious insult towards female politician. However, almost all the publicly available hate speech annotated datasets do not contain context information. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . We have created a new dataset consisting of 1528 Fox News user comments, which were taken from 10 complete discussion threads for 10 widely read Fox News articles. It is different from previous datasets from the following two perspectives. First, it preserves rich context information for each comment, including its user screen name, all comments in the same thread and the news article the comment is written for. Second, there is no biased data selection and all comments in each news comment thread were annotated. In this paper, we explored two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information in automatic hate speech detection. First, logistic regression models have been used in several prior hate speech detection studies BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF0 , BIBREF2 , BIBREF9 and various features have been tried including character-level and word-level n-gram features, syntactic features, linguistic features, and comment embedding features. However, all the features were derived from the to-be-classified text itself. In contrast, we experiment with logistic regression models using features extracted from context text as well. Second, neural network models BIBREF10 , BIBREF11 , BIBREF12 have the potential to capture compositional meanings of text, but they have not been well explored for online hate speech detection until recently BIBREF13 . We experiment with neural net models containing separate learning components that model compositional meanings of context information. Furthermore, recognizing unique strengths of each type of models, we build ensemble models of the two types of models. Evaluation shows that context-aware logistic regression models and neural net models outperform their counterparts that are blind with context information. Especially, the final ensemble models outperform a strong baseline system by around 10% in F1-score. Related Works Recently, a few datasets with human labeled hate speech have been created, however, most of existing datasets do not contain context information. Due to the sparsity of hate speech in everyday posts, researchers tend to sample candidates from bootstrapping instead of random sampling, in order to increase the chance of seeing hate speech. 
Therefore, the collected data instances are likely to be from distinct contexts. For instance, in the Primary Data Set described in BIBREF14 and later used by BIBREF9 , 10% of the dataset is randomly selected while the remaining consists of comments tagged by users and editors. BIBREF15 built a balanced data set of 24.5k tweets by selecting from Twitter accounts that claimed to be racist or were deemed racist using their followed news sources. BIBREF5 collected hateful tweets related to the murder of Drummer Lee Rigby in 2013. BIBREF0 provided a corpus of 16k annotated tweets in which 3.3k are labeled as sexist and 1.9k are labeled as racist. They created this corpus by bootstrapping from certain key words ,specific hashtags and certain prolific users. BIBREF16 created a dataset of 9000 human labeled paragraphs that were collected using regular expression matching in order to find hate speech targeting Judaism and Israel. BIBREF7 extracted data instances from instagram that were associated with certain user accounts. BIBREF2 presented a very large corpus containing over 115k Wikipedia comments that include around 37k randomly sampled comments and the remaining 78k comments were selected from Wikipedia blocked comments. Most of existing hate speech detection models are feature-based and use features derived from the target text itself. BIBREF5 experimented with different classification methods including Bayesian Logistic Regression, Random Forest Decision Trees and SVMs, using features such as n-grams, reduced n-grams, dependency paths, and hateful terms. BIBREF0 proposed a logistic regression model using character n-gram features. BIBREF14 used the paragraph2vec for joint modeling of comments and words, then the generated embeddings were used as feature in a logistic regression model. BIBREF9 experimented with various syntactic, linguistic and distributional semantic features including word length, sentence length, part of speech tags, and embedding features, in order to improve performance of logistic regression classifiers. Recently, BIBREF17 surveyed current approaches for hate speech detection, which interestingly also called to attention on modeling context information for resolving difficult hate speech instances. Corpus Overview The Fox News User Comments corpus consists of 1528 annotated comments (435 labeled as hateful) that were posted by 678 different users in 10 complete news discussion threads in the Fox News website. The 10 threads were manually selected and represent popular discussion threads during August 2016. All of the comments included in these 10 threads were annotated. The number of comments in each of the 10 threads is roughly equal. Rich context information was kept for each comment, including its user screen name, the comments and their nested structure and the original news article. The data corpus along with annotation guidelines is posted on github. Annotation Guidelines Our annotation guidelines are similar to the guidelines used by BIBREF9 . We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation. The labeling of hateful speech in our corpus is binary. A comment will be labeled as hateful or non-hateful. Annotation Procedure We identified two native English speakers for annotating online user comments. The two annotators first discussed and practices before they started annotation. 
They achieved a surprisingly high Kappa score BIBREF18 of 0.98 on 648 comments from 4 threads. We think that thorough discussions in the training stage is the key for achieving this high inter-agreement. For those comments which annotators disagreed on, we label them as hateful as long as one annotator labeled them as hateful. Then one annotator continued to annotate the remaining 880 comments from the remaining six discussion threads. Characteristics in Fox News User Comments corpus Hateful comments in the Fox News User Comments Corpus is often subtle, creative and implicit. Therefore, context information is necessary in order to accurately identify such hate speech. The hatefulness of many comments depended on understanding their contexts. For instance, (3) mastersundholm: Just remember no trabjo no cervesa This comment is posted for the news "States moving to restore work requirements for food stamp recipients". This comment implies that Latino immigrants abuse the usage of food stamp policy, which is clearly a stereotyping. Many hateful comments use implicit and subtle language, which contain no clear hate indicating word or phrase. In order to recognize such hard cases, we hypothesize that neural net models are more suitable by capturing overall composite meanings of a comment. For instance, the following comment is a typical implicit stereotyping against women. (4) MarineAssassin: Hey Brianne - get in the kitchen and make me a samich. Chop Chop 11% of our annotated comments have more than 50 words each. In such long comments, the hateful indicators usually appear in a small region of a comment while the majority of the comment is neutral. For example, (5) TMmckay: I thought ...115 words... Too many blacks winning, must be racist and needs affirmative action to make whites equally win! Certain user screen names indicate hatefulness, which imply that comments posted by these users are likely to contain hate speech. In the following example, commie is a slur for communists. (6)nocommie11: Blah blah blah. Israel is the only civilized nation in the region to keep the unwashed masses at bay. Logistic Regression Models In logistic regression models, we extract four types of features, word-level and character-level n-gram features as well as two types of lexicon derived features. We extract these four types of features from the target comment first. Then we extract these features from two sources of context texts, specifically the title of the news article that the comment was posted for and the screen name of the user who posted the comment. For logistic regression model implementation, we use l2 loss. We adopt the balanced class weight as described in Scikit learn. Logistic regression model with character-level n-gram features is presented as a strong baseline for comparison since it was shown very effective. BIBREF0 , BIBREF9 For character level n-grams, we extract character level bigrams, tri-grams and four-grams. For word level n-grams, we extract unigrams and bigrams. Linguistic Inquiry and Word Count, also called LIWC, has been proven useful for text analysis and classification BIBREF19 . In the LIWC dictionary, each word is labeled with several semantic labels. In our experiment, we use the LIWC 2015 dictionary which contain 125 semantic categories. Each word is converted into a 125 dimension LIWC vector, one dimension per semantic category. The LIWC feature vector for a comment or its context is a 125 dimension vector as well, which is the sum of all its words' LIWC vectors. 
NRC emotion lexicon contains a list of English words that were labeled with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and sentiment polarities (negative and positive) BIBREF20 . We use NRC emotion lexicon to capture emotion clues in text. Each word is converted into a 10 dimension emotion vector, corresponding to eight emotion types and two polarity labels. The emotion vector for a comment or its context is a 10 dimension vector as well, which is the sum of all its words' emotion vectors. As shown in table TABREF20 , given comment as the only input content, the combination of character n-grams, word n-grams, LIWC feature and NRC feature achieves the best performance. It shows that in addition to character level features, adding more features can improve hate speech detection performance. However, the improvement is limited. Compared with baseline model, the F1 score only improves 1.3%. In contrast, when context information was taken into account, the performance greatly improved. Specifically, after incorporating features extracted from the news title and username, the model performance was improved by around 4% in both F1 score and AUC score. This shows that using additional context based features in logistic regression models is useful for hate speech detection. Neural Network Models Our neural network model mainly consists of three parallel LSTM BIBREF21 layers. It has three different inputs, including the target comment, its news title and its username. Comment and news title are encoded into a sequence of word embeddings. We use pre-trained word embeddings in word2vec. Username is encoded into a sequence of characters. We use one-hot encoding of characters. Comment is sent into a bi-directional LSTM with attention mechanism. BIBREF22 . News title and username are sent into a bi-directional LSTM. Note that we did not apply attention mechanism to the neural network models for username and news title because both types of context are relatively short and attention mechanism tends to be useful when text input is long. The three LSTM output layers are concatenated, then connected to a sigmoid layer, which outputs predictions. The number of hidden units in each LSTM used in our model is set to be 100. The recurrent dropout rate of LSTMs is set to 0.2. In addition, we use binary cross entropy as the loss function and a batch size of 128. The neural network models are trained for 30 epochs. As shown in table TABREF21 , given comment as the only input content, the bi-directional LSTM model with attention mechanism achieves the best performance. Note that the attention mechanism significantly improves the hate speech detection performance of the bi-directional LSTM model. We hypothesize that this is because hate indicator phrases are often concentrated in a small region of a comment, which is especially the case for long comments. Ensemble Models To study the difference of logistic regression model and neural network model and potentially get performance improvement, we will build and evaluate ensemble models. As shown in table TABREF24 , both ensemble models significantly improved hate speech detection performance. Figure FIGREF28 shows the system prediction results of comments that were labeled as hateful in the dataset. It can be seen that the two models perform differently. We further examined predicted comments and find that both types of models have unique strengths in identifying certain types of hateful comments. 
The feature-based logistic regression models are capable of making good use of character-level n-gram features, which are powerful in identifying hateful comments that contain OOV words, capitalized words or misspelled words. We provide two examples from the hateful comments that were only labeled by the logistic regression model: (7)kmawhmf:FBLM. Here FBLM means fuck Black Lives Matter. This hateful comment contains only character information, which is exactly what our logistic regression model can exploit. (8)SFgunrmn: what a efen loon, but most femanazis are. This comment deliberately misspells feminazi, a derogatory term for feminists, as femanazis. It shows that the logistic regression model is capable of dealing with misspellings. The LSTM with attention mechanism is suitable for identifying the specific small regions indicating hatefulness in long comments. In addition, the neural net models are powerful in capturing implicit hateful language as well. The following are two hateful comment examples that were only identified by the neural net model: (9)freedomscout: @LarJass Many religions are poisonous to logic and truth, that much is true...and human beings still remain fallen human beings even they are Redeemed by the Sacrifice of Jesus Christ. So there's that. But the fallacies of thinking cannot be limited or attributed to religion but to error inherent in human motivation, the motivation to utter self-centeredness as fallen sinful human beings. Nearly all of the world's many religions are expressions of that utter sinful nature...Christianity and Judaism being the sole exceptions. This comment expresses stereotyping against religions other than Christianity and Judaism. The hatefulness is concentrated within two segments of the comment. (10)mamahattheridge: blacks Love being victims. In this comment, the four words themselves are not hateful at all, but when combined together, they are clearly hateful against black people. Evaluation We evaluate our models by 10-fold cross validation using our newly created Fox News User Comments Corpus. Both types of models use the exact same 10 folds of training data and test data. We report experimental results using multiple metrics, including accuracy, precision/recall/F1-score, and area under the curve (AUC). Experimental Results Table TABREF20 shows the performance of the logistic regression models. The first section of table TABREF20 shows the performance of logistic regression models using features extracted from a target comment only. The results show that the logistic regression model improved on every metric after adding both word-level n-gram features and lexicon-derived features. However, the improvements are moderate. The second section shows the performance of logistic regression models using the four types of features extracted from both a target comment and its contexts. The results show that the logistic regression model using features extracted from a comment and both types of context achieved the best performance, with improvements of 2.8% and 2.5% in AUC score and F1-score respectively. Table TABREF21 shows the performance of the neural network models. The first section of table TABREF21 shows the performance of several neural network models that use comments as the only input. The model names are self-explanatory. We can see that the attention mechanism coupled with the bi-directional LSTM neural net greatly improved online hate speech detection, by 5.7% in AUC score.
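As a brief aside before the context-aware results continue below, here is a minimal sketch of the shared evaluation protocol just described: ten fixed folds used by both model types, scored with accuracy, precision/recall/F1 and AUC. The fold seed, the 0.5 decision threshold and the scikit-learn-style model interface with pandas inputs are assumptions, not details taken from the paper.

```python
# Sketch of the 10-fold cross-validation protocol described above.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, precision_recall_fscore_support, roc_auc_score

def cross_validate(model_fn, X, y, n_splits=10, seed=0):
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in folds.split(X, y):
        model = model_fn()                       # fresh model per fold
        model.fit(X.iloc[train_idx], y[train_idx])
        prob = model.predict_proba(X.iloc[test_idx])[:, 1]
        pred = (prob >= 0.5).astype(int)         # threshold is an assumption
        p, r, f1, _ = precision_recall_fscore_support(y[test_idx], pred, average="binary")
        scores.append({
            "accuracy": accuracy_score(y[test_idx], pred),
            "precision": p, "recall": r, "f1": f1,
            "auc": roc_auc_score(y[test_idx], prob),
        })
    # average each metric across the 10 folds
    return {k: np.mean([s[k] for s in scores]) for k in scores[0]}
```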
The second section of table TABREF21 shows the performance of the best neural net model (bi-directional LSTM with attention) after adding additional learning components that take context as input. The results show that adding the username and the news title can both improve model performance. Using the news title gives the best F1 score, while using both the news title and the username gives the best AUC score. Table TABREF24 shows the performance of ensemble models built by combining the prediction results of the best context-aware logistic regression model and the best context-aware neural network model. We used two strategies for combining the prediction results of the two types of models. Specifically, the Max Score Ensemble model made the final decisions based on the maximum of the two scores assigned by the two separate models, while the Average Score Ensemble model used the average score to make the final decisions. We can see that both ensemble models further improved hate speech detection performance compared with using one model only, and achieved the best classification performance. Compared with the logistic regression baseline, the Max Score Ensemble model improved the recall by more than 20% with comparable precision and improved the F1 score by around 10%; in addition, the Average Score Ensemble model improved the AUC score by around 7%. Conclusion We demonstrated the importance of utilizing context information for online hate speech detection. We first presented a corpus of hateful speech consisting of full threads of online discussion posts. In addition, we presented two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information for improving hate speech detection performance. Furthermore, we showed that ensemble models leveraging the strengths of both types of models achieve the best performance for automatic online hate speech detection.
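The two ensembling strategies just described amount to a one-line score combination. The sketch below illustrates them; the 0.5 decision threshold is an assumption.

```python
# Minimal sketch of the Max Score and Average Score ensembles described above: the
# final score is the maximum or the average of the probabilities from the context-aware
# logistic regression model and the context-aware neural network model.
import numpy as np

def ensemble_scores(lr_probs, nn_probs, strategy="max"):
    lr_probs, nn_probs = np.asarray(lr_probs), np.asarray(nn_probs)
    if strategy == "max":        # Max Score Ensemble
        return np.maximum(lr_probs, nn_probs)
    if strategy == "average":    # Average Score Ensemble
        return (lr_probs + nn_probs) / 2.0
    raise ValueError(f"unknown strategy: {strategy}")

# a comment is labeled hateful if the ensembled score crosses 0.5 (threshold assumed)
predictions = ensemble_scores([0.3, 0.9], [0.7, 0.4], strategy="max") >= 0.5
```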
language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation
1d739bb8e5d887fdfd1f4b6e39c57695c042fa25
1d739bb8e5d887fdfd1f4b6e39c57695c042fa25_0
Q: What architecture has the neural network? Text: Introduction Following a turbulent election season, 2016's cyber world is awash with hate speech. Automatic detection of hate speech has become an urgent need since human supervision is unable to deal with large quantities of emerging texts. Context information, by our definition, is the text, symbols or any other kind of information related to the original text. While intuitively, context accompanying hate speech is useful for detecting hate speech, context information of hate speech has been overlooked in existing datasets and automatic detection models. Online hate speech tends to be subtle and creative, which makes context especially important for automatic hate speech detection. For instance, (1) barryswallows: Merkel would never say NO This comment is posted for the News titled by "German lawmakers approve 'no means no' rape law after Cologne assaults". With context, it becomes clear that this comment is a vicious insult towards female politician. However, almost all the publicly available hate speech annotated datasets do not contain context information. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . We have created a new dataset consisting of 1528 Fox News user comments, which were taken from 10 complete discussion threads for 10 widely read Fox News articles. It is different from previous datasets from the following two perspectives. First, it preserves rich context information for each comment, including its user screen name, all comments in the same thread and the news article the comment is written for. Second, there is no biased data selection and all comments in each news comment thread were annotated. In this paper, we explored two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information in automatic hate speech detection. First, logistic regression models have been used in several prior hate speech detection studies BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF0 , BIBREF2 , BIBREF9 and various features have been tried including character-level and word-level n-gram features, syntactic features, linguistic features, and comment embedding features. However, all the features were derived from the to-be-classified text itself. In contrast, we experiment with logistic regression models using features extracted from context text as well. Second, neural network models BIBREF10 , BIBREF11 , BIBREF12 have the potential to capture compositional meanings of text, but they have not been well explored for online hate speech detection until recently BIBREF13 . We experiment with neural net models containing separate learning components that model compositional meanings of context information. Furthermore, recognizing unique strengths of each type of models, we build ensemble models of the two types of models. Evaluation shows that context-aware logistic regression models and neural net models outperform their counterparts that are blind with context information. Especially, the final ensemble models outperform a strong baseline system by around 10% in F1-score. Related Works Recently, a few datasets with human labeled hate speech have been created, however, most of existing datasets do not contain context information. Due to the sparsity of hate speech in everyday posts, researchers tend to sample candidates from bootstrapping instead of random sampling, in order to increase the chance of seeing hate speech. 
Therefore, the collected data instances are likely to be from distinct contexts. For instance, in the Primary Data Set described in BIBREF14 and later used by BIBREF9 , 10% of the dataset is randomly selected while the remaining consists of comments tagged by users and editors. BIBREF15 built a balanced data set of 24.5k tweets by selecting from Twitter accounts that claimed to be racist or were deemed racist using their followed news sources. BIBREF5 collected hateful tweets related to the murder of Drummer Lee Rigby in 2013. BIBREF0 provided a corpus of 16k annotated tweets in which 3.3k are labeled as sexist and 1.9k are labeled as racist. They created this corpus by bootstrapping from certain key words ,specific hashtags and certain prolific users. BIBREF16 created a dataset of 9000 human labeled paragraphs that were collected using regular expression matching in order to find hate speech targeting Judaism and Israel. BIBREF7 extracted data instances from instagram that were associated with certain user accounts. BIBREF2 presented a very large corpus containing over 115k Wikipedia comments that include around 37k randomly sampled comments and the remaining 78k comments were selected from Wikipedia blocked comments. Most of existing hate speech detection models are feature-based and use features derived from the target text itself. BIBREF5 experimented with different classification methods including Bayesian Logistic Regression, Random Forest Decision Trees and SVMs, using features such as n-grams, reduced n-grams, dependency paths, and hateful terms. BIBREF0 proposed a logistic regression model using character n-gram features. BIBREF14 used the paragraph2vec for joint modeling of comments and words, then the generated embeddings were used as feature in a logistic regression model. BIBREF9 experimented with various syntactic, linguistic and distributional semantic features including word length, sentence length, part of speech tags, and embedding features, in order to improve performance of logistic regression classifiers. Recently, BIBREF17 surveyed current approaches for hate speech detection, which interestingly also called to attention on modeling context information for resolving difficult hate speech instances. Corpus Overview The Fox News User Comments corpus consists of 1528 annotated comments (435 labeled as hateful) that were posted by 678 different users in 10 complete news discussion threads in the Fox News website. The 10 threads were manually selected and represent popular discussion threads during August 2016. All of the comments included in these 10 threads were annotated. The number of comments in each of the 10 threads is roughly equal. Rich context information was kept for each comment, including its user screen name, the comments and their nested structure and the original news article. The data corpus along with annotation guidelines is posted on github. Annotation Guidelines Our annotation guidelines are similar to the guidelines used by BIBREF9 . We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation. The labeling of hateful speech in our corpus is binary. A comment will be labeled as hateful or non-hateful. Annotation Procedure We identified two native English speakers for annotating online user comments. The two annotators first discussed and practices before they started annotation. 
They achieved a surprisingly high Kappa score BIBREF18 of 0.98 on 648 comments from 4 threads. We think that thorough discussions in the training stage is the key for achieving this high inter-agreement. For those comments which annotators disagreed on, we label them as hateful as long as one annotator labeled them as hateful. Then one annotator continued to annotate the remaining 880 comments from the remaining six discussion threads. Characteristics in Fox News User Comments corpus Hateful comments in the Fox News User Comments Corpus is often subtle, creative and implicit. Therefore, context information is necessary in order to accurately identify such hate speech. The hatefulness of many comments depended on understanding their contexts. For instance, (3) mastersundholm: Just remember no trabjo no cervesa This comment is posted for the news "States moving to restore work requirements for food stamp recipients". This comment implies that Latino immigrants abuse the usage of food stamp policy, which is clearly a stereotyping. Many hateful comments use implicit and subtle language, which contain no clear hate indicating word or phrase. In order to recognize such hard cases, we hypothesize that neural net models are more suitable by capturing overall composite meanings of a comment. For instance, the following comment is a typical implicit stereotyping against women. (4) MarineAssassin: Hey Brianne - get in the kitchen and make me a samich. Chop Chop 11% of our annotated comments have more than 50 words each. In such long comments, the hateful indicators usually appear in a small region of a comment while the majority of the comment is neutral. For example, (5) TMmckay: I thought ...115 words... Too many blacks winning, must be racist and needs affirmative action to make whites equally win! Certain user screen names indicate hatefulness, which imply that comments posted by these users are likely to contain hate speech. In the following example, commie is a slur for communists. (6)nocommie11: Blah blah blah. Israel is the only civilized nation in the region to keep the unwashed masses at bay. Logistic Regression Models In logistic regression models, we extract four types of features, word-level and character-level n-gram features as well as two types of lexicon derived features. We extract these four types of features from the target comment first. Then we extract these features from two sources of context texts, specifically the title of the news article that the comment was posted for and the screen name of the user who posted the comment. For logistic regression model implementation, we use l2 loss. We adopt the balanced class weight as described in Scikit learn. Logistic regression model with character-level n-gram features is presented as a strong baseline for comparison since it was shown very effective. BIBREF0 , BIBREF9 For character level n-grams, we extract character level bigrams, tri-grams and four-grams. For word level n-grams, we extract unigrams and bigrams. Linguistic Inquiry and Word Count, also called LIWC, has been proven useful for text analysis and classification BIBREF19 . In the LIWC dictionary, each word is labeled with several semantic labels. In our experiment, we use the LIWC 2015 dictionary which contain 125 semantic categories. Each word is converted into a 125 dimension LIWC vector, one dimension per semantic category. The LIWC feature vector for a comment or its context is a 125 dimension vector as well, which is the sum of all its words' LIWC vectors. 
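The lexicon-derived features described above (and the NRC emotion features described next, which work the same way in 10 dimensions rather than 125) reduce to summing per-word category vectors. The sketch below illustrates that computation only; the lexicon contents shown are hypothetical placeholders, since both LIWC 2015 and the NRC lexicon must be obtained separately.

```python
# Sketch of how a lexicon-derived feature vector is built: each word maps to a category
# vector (125 dimensions for LIWC 2015, 10 for the NRC emotion lexicon) and a comment,
# title or username is represented by the sum of its words' vectors.
import numpy as np

def lexicon_vector(tokens, lexicon, dim):
    """Sum the per-word category vectors for one piece of text."""
    vec = np.zeros(dim)
    for token in tokens:
        vec += lexicon.get(token.lower(), np.zeros(dim))
    return vec

# Hypothetical lexicons: {word: binary category-indicator vector}; the indices are illustrative.
liwc = {"kitchen": np.eye(125)[37]}   # e.g. a home/domestic-style LIWC category
nrc = {"poisonous": np.eye(10)[3]}    # e.g. the disgust dimension

comment = "get in the kitchen".split()
features = np.concatenate([lexicon_vector(comment, liwc, 125),
                           lexicon_vector(comment, nrc, 10)])
```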
NRC emotion lexicon contains a list of English words that were labeled with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and sentiment polarities (negative and positive) BIBREF20 . We use NRC emotion lexicon to capture emotion clues in text. Each word is converted into a 10 dimension emotion vector, corresponding to eight emotion types and two polarity labels. The emotion vector for a comment or its context is a 10 dimension vector as well, which is the sum of all its words' emotion vectors. As shown in table TABREF20 , given comment as the only input content, the combination of character n-grams, word n-grams, LIWC feature and NRC feature achieves the best performance. It shows that in addition to character level features, adding more features can improve hate speech detection performance. However, the improvement is limited. Compared with baseline model, the F1 score only improves 1.3%. In contrast, when context information was taken into account, the performance greatly improved. Specifically, after incorporating features extracted from the news title and username, the model performance was improved by around 4% in both F1 score and AUC score. This shows that using additional context based features in logistic regression models is useful for hate speech detection. Neural Network Models Our neural network model mainly consists of three parallel LSTM BIBREF21 layers. It has three different inputs, including the target comment, its news title and its username. Comment and news title are encoded into a sequence of word embeddings. We use pre-trained word embeddings in word2vec. Username is encoded into a sequence of characters. We use one-hot encoding of characters. Comment is sent into a bi-directional LSTM with attention mechanism. BIBREF22 . News title and username are sent into a bi-directional LSTM. Note that we did not apply attention mechanism to the neural network models for username and news title because both types of context are relatively short and attention mechanism tends to be useful when text input is long. The three LSTM output layers are concatenated, then connected to a sigmoid layer, which outputs predictions. The number of hidden units in each LSTM used in our model is set to be 100. The recurrent dropout rate of LSTMs is set to 0.2. In addition, we use binary cross entropy as the loss function and a batch size of 128. The neural network models are trained for 30 epochs. As shown in table TABREF21 , given comment as the only input content, the bi-directional LSTM model with attention mechanism achieves the best performance. Note that the attention mechanism significantly improves the hate speech detection performance of the bi-directional LSTM model. We hypothesize that this is because hate indicator phrases are often concentrated in a small region of a comment, which is especially the case for long comments. Ensemble Models To study the difference of logistic regression model and neural network model and potentially get performance improvement, we will build and evaluate ensemble models. As shown in table TABREF24 , both ensemble models significantly improved hate speech detection performance. Figure FIGREF28 shows the system prediction results of comments that were labeled as hateful in the dataset. It can be seen that the two models perform differently. We further examined predicted comments and find that both types of models have unique strengths in identifying certain types of hateful comments. 
The feature-based logistic regression models are capable of making good use of character-level n-gram features, which are powerful in identifying hateful comments that contain OOV words, capitalized words or misspelled words. We provide two examples from the hateful comments that were only labeled by the logistic regression model: (7)kmawhmf:FBLM. Here FBLM means fuck Black Lives Matter. This hateful comment contains only character information, which is exactly what our logistic regression model can exploit. (8)SFgunrmn: what a efen loon, but most femanazis are. This comment deliberately misspells feminazi, a derogatory term for feminists, as femanazis. It shows that the logistic regression model is capable of dealing with misspellings. The LSTM with attention mechanism is suitable for identifying the specific small regions indicating hatefulness in long comments. In addition, the neural net models are powerful in capturing implicit hateful language as well. The following are two hateful comment examples that were only identified by the neural net model: (9)freedomscout: @LarJass Many religions are poisonous to logic and truth, that much is true...and human beings still remain fallen human beings even they are Redeemed by the Sacrifice of Jesus Christ. So there's that. But the fallacies of thinking cannot be limited or attributed to religion but to error inherent in human motivation, the motivation to utter self-centeredness as fallen sinful human beings. Nearly all of the world's many religions are expressions of that utter sinful nature...Christianity and Judaism being the sole exceptions. This comment expresses stereotyping against religions other than Christianity and Judaism. The hatefulness is concentrated within two segments of the comment. (10)mamahattheridge: blacks Love being victims. In this comment, the four words themselves are not hateful at all, but when combined together, they are clearly hateful against black people. Evaluation We evaluate our models by 10-fold cross validation using our newly created Fox News User Comments Corpus. Both types of models use the exact same 10 folds of training data and test data. We report experimental results using multiple metrics, including accuracy, precision/recall/F1-score, and area under the curve (AUC). Experimental Results Table TABREF20 shows the performance of the logistic regression models. The first section of table TABREF20 shows the performance of logistic regression models using features extracted from a target comment only. The results show that the logistic regression model improved on every metric after adding both word-level n-gram features and lexicon-derived features. However, the improvements are moderate. The second section shows the performance of logistic regression models using the four types of features extracted from both a target comment and its contexts. The results show that the logistic regression model using features extracted from a comment and both types of context achieved the best performance, with improvements of 2.8% and 2.5% in AUC score and F1-score respectively. Table TABREF21 shows the performance of the neural network models. The first section of table TABREF21 shows the performance of several neural network models that use comments as the only input. The model names are self-explanatory. We can see that the attention mechanism coupled with the bi-directional LSTM neural net greatly improved online hate speech detection, by 5.7% in AUC score.
The second section of table TABREF21 shows the performance of the best neural net model (bi-directional LSTM with attention) after adding additional learning components that take context as input. The results show that adding the username and the news title can both improve model performance. Using the news title gives the best F1 score, while using both the news title and the username gives the best AUC score. Table TABREF24 shows the performance of ensemble models built by combining the prediction results of the best context-aware logistic regression model and the best context-aware neural network model. We used two strategies for combining the prediction results of the two types of models. Specifically, the Max Score Ensemble model made the final decisions based on the maximum of the two scores assigned by the two separate models, while the Average Score Ensemble model used the average score to make the final decisions. We can see that both ensemble models further improved hate speech detection performance compared with using one model only, and achieved the best classification performance. Compared with the logistic regression baseline, the Max Score Ensemble model improved the recall by more than 20% with comparable precision and improved the F1 score by around 10%; in addition, the Average Score Ensemble model improved the AUC score by around 7%. Conclusion We demonstrated the importance of utilizing context information for online hate speech detection. We first presented a corpus of hateful speech consisting of full threads of online discussion posts. In addition, we presented two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information for improving hate speech detection performance. Furthermore, we showed that ensemble models leveraging the strengths of both types of models achieve the best performance for automatic online hate speech detection.
three parallel LSTM BIBREF21 layers
5c70fdd3d6b67031768d3e28336942e49bf9a500
5c70fdd3d6b67031768d3e28336942e49bf9a500_0
Q: How is human interaction consumed by the model? Text: Introduction Collaborative human-machine story-writing has had a recent resurgence of attention from the research community BIBREF0 , BIBREF1 . It represents a frontier for AI research; as a research community we have developed convincing NLP systems for some generative tasks like machine translation, but lag behind in creative areas like open-domain storytelling. Collaborative open-domain storytelling incorporates human interactivity for one of two aims: to improve human creativity via the aid of a machine, or to improve machine quality via the aid of a human. Previously existing approaches treat the former aim, and have shown that storytelling systems are not yet developed enough to help human writers. We attempt the latter, with the goal of investigating at what stage human collaboration is most helpful. gordon2009sayanything use an information retrieval based system to write by alternating turns between a human and their system. clark2018mil use a similar turn-taking approach to interactivity, but employ a neural model for generation and allow the user to edit the generated sentence before accepting it. They find that users prefer a full-sentence collaborative setup (vs. shorter fragments) but are mixed with regard to the system-driven approach to interaction. roemmele2017eval experiment with a user-driven setup, where the machine doesn't generate until the user requests it to, and then the user can edit or delete at will. They leverage user-acceptance or rejection of suggestions as a tool for understanding the characteristics of a helpful generation. All of these systems involve the user in the story-writing process, but lack user involvement in the story-planning process, and so they lean on the user's ability to knit a coherent overall story together out of locally related sentences. They also do not allow a user to control the novelty or “unexpectedness” of the generations, which clark2018mil find to be a weakness. Nor do they enable iteration; a user cannot revise earlier sentences and have the system update later generations. We develop a system that allows a user to interact in all of these ways that were limitations in previous systems; it enables involvement in planning, editing, iterative revising, and control of novelty. We conduct experiments to understand which types of interaction are most effective for improving stories and for making users satisfied and engaged. We have two main interfaces that enable human interaction with the computer. There is cross-model interaction, where the machine does all the composition work, and displays three different versions of a story written by three distinct models for a human to compare. The user guides generation by providing a topic for story-writing and by tweaking decoding parameters to control novelty, or diversity. The second interface is intra-model interaction, where a human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages to jointly create better stories. The full range of interactions available to a user is: select a model, provide a topic, change diversity of content, collaborate on the planning for the story, and collaborate on the story sentences. It is entirely user-driven, as the users control how much is their own work and how much is the machine's at every stage. 
It supports revision; a user can modify an earlier part of a written story or of the story plan at any point, and observe how this affects later generations. System Overview Figure FIGREF3 shows a diagram of the interaction system. The dotted arrows represent optional user interactions. Cross-model interaction requires the user to enter a topic, such as "the not so haunted house", and can optionally vary the diversity used in the Storyline Planner or the Story Writer. Diversity numbers correspond directly to softmax temperatures, which we restrict to a reasonable range, determined empirically. The settings are sent to the Storyline Planner module, which generates a storyline for the story in the form of a sequence of phrases as per the method of yao2018plan. Everything is then sent to the Story Writer, which will return three stories. Intra-model interaction enables advanced interactions with one story system of the user's choice. The Storyline Planner returns either one storyline phrase or many, and composes the final storyline out of the combination of phrases the system has generated, phrases the user has written, and edits the user has made. These are sent to the Story Writer, which returns either a single sentence or a full story as per the user's request. The process is flexible and iterative. The user can choose how much or how little content they want to provide, edit, or re-generate, and they can return to any step at any time until they decide they are done. To enable interactive flexibility, the system must handle open-domain user input. User input is lower-cased and tokenized to match the model training data via spaCy. Model output is naively detokenized via Moses BIBREF2, based on feedback from users that this was more natural. User input OOV handling is done via WordNet BIBREF3 by recursively searching for hypernyms and hyponyms (in that order) until either an in-vocabulary word is found or a maximum distance from the initial word is reached. We additionally experimented with using cosine similarity to GloVe vectors BIBREF4, but found that to be slower and not qualitatively better for this domain. Web Interface Figure FIGREF10 shows screenshots for both the cross-model and intra-model modes of interaction. Figure FIGREF10 shows that the cross-model mode makes clear the differences between different model generations for the same topic. Figure FIGREF10 shows the variety of interactions a user can take in intra-model interaction, and is annotated with an example-in-action. User-inserted text is underlined in blue; generated text that has been removed by the user is in grey strike-through. The refresh symbol marks areas that the user re-generated to get a different sentence (presumably after being unhappy with the first result). As can be seen in this example, minor user involvement can result in a significantly better story. Model Design All models for both the Storyline Planner and Story Writer modules are conditional language models implemented with LSTMs based on merity2018regularizing. These are 3-stacked LSTMs that include weight-dropping, weight-tying, variable-length backpropagation with learning rate adjustment, and Averaged Stochastic Gradient Descent (ASGD). They are trained on the ROC dataset BIBREF5, which after lowercasing and tokenization has a vocabulary of 38k. Storyline Phrases are extracted as in yao2018plan via the RAKE algorithm BIBREF6, which results in a slightly smaller Storyline vocabulary of 31k. The Storyline Planner does decoding via sampling to encourage creative exploration.
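The WordNet-based OOV fallback just described can be sketched as a breadth-first walk over hypernyms and hyponyms. The sketch below uses NLTK's WordNet interface; the traversal order within each level, the distance cap, and the fallback of keeping the original word are assumptions rather than details stated in the paper.

```python
# Sketch of the WordNet OOV handling described above: search hypernyms and hyponyms
# (in that order) until an in-vocabulary lemma is found or a maximum distance is reached.
from nltk.corpus import wordnet as wn

def map_oov(word, vocab, max_depth=3):
    if word in vocab:
        return word
    frontier = wn.synsets(word)
    for _ in range(max_depth):
        next_frontier = []
        for synset in frontier:
            # hypernyms are tried before hyponyms, as in the paper
            for neighbor in synset.hypernyms() + synset.hyponyms():
                for lemma in neighbor.lemma_names():
                    lemma = lemma.replace("_", " ")
                    if lemma in vocab:
                        return lemma
                next_frontier.append(neighbor)
        frontier = next_frontier
    return word  # nothing found within the distance cap; keeping the original is an assumption
```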
The Story Writer has an option to use one or all three systems, all of which decode via beamsearch and are detailed below. The Title-to-Story system is a baseline, which generates directly from topic. The Plan-and-Write system adopts the static model in yao2018plan to use the storyline to supervise story-writing. Plan-and-Revise is a new system that combines the strengths of yao2018plan and holtzman2018learning. It supplements the Plan-and-Write model by training two discriminators on the ROC data and using them to re-rank the LSTM generations to prefer increased creativity and relevance. Thus the decoding objective of this system becomes INLINEFORM0 where INLINEFORM1 is the conditional language model probability of the LSTM, INLINEFORM2 is the discriminator scoring function, and INLINEFORM3 is the learned weight of that discriminator. At each timestep all live beam hypotheses are scored and re-ranked. Discriminator weights are learnt by minimizing Mean Squared Error on the difference between the scores of gold standard and generated story sentences. Experiments We experiment with six types of interaction: five variations created by restricting different capabilities of our system, and a sixth turn-taking baseline that mimics the interaction of the previous work BIBREF1 , BIBREF7 . We choose our experiments to address the research questions: What type of interaction is most engaging? Which type results in the best stories? Can a human tasked with correcting for certain weaknesses of a model successfully do so? The variations on interactions that we tested are: We expand experiment 5 to answer the question of whether a human-in-the-loop interactive system can address specific shortcomings of generated stories. We identify three types of weaknesses common to generation systems – Creativity, Relevance, and Causal & Temporal Coherence, and conduct experiments where the human is instructed to focus on improving specifically one of them. The targeted human improvement areas intentionally match the Plan-and-Revise discriminators, so that, if successful, the "human discriminator" data can assist in training the machine discriminators. All experiments (save experiment 2, which lets the user pick between models) use the Plan-and-Revise system. Details We recruit 30 Mechanical Turk workers per experiment (270 unique workers total) to complete story writing tasks with the system. We constrain them to ten minutes of work (five for writing and five for a survey) and provide them with a fixed topic to control this factor across experiments. They co-create a story and complete a questionnaire which asks them to self-report on their engagement, satisfaction, and perception of story quality. For the additional focused error-correction experiments, we instruct Turkers to try to improve the machine-generated stories with regard to the given aspect, under the same time constraints. As an incentive, they are given a small bonus if they are later judged to have succeeded. We then ask a separate set of Turkers to rate the stories for overall quality and the three improvement areas. All ratings are on a five-point scale. We collect two ratings per story, and throw out ratings that disagree by more than 2 points. A total of 11% of ratings were thrown out, leaving four metrics across 241 stories for analysis. Conclusions and Future Work We have shown that all levels of human-computer collaboration improve story quality across all metrics, compared to a baseline computer-only story generation system. 
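The Plan-and-Revise re-ranking step described above can be sketched as follows. Because the exact objective is elided in this text (the INLINEFORM placeholders), the additive combination of the LM log-probability with weighted discriminator scores shown here is an assumption consistent with the surrounding description, not a verbatim reproduction of the authors' formula.

```python
# Sketch of re-ranking live beam hypotheses with collaborative discriminators, as in
# the Plan-and-Revise system described above.

def rescore_beam(hypotheses, lm_logprob, discriminators, weights):
    """hypotheses: partial stories; discriminators: scoring functions (e.g. creativity,
    relevance); weights: per-discriminator weights, learned in the paper by minimizing
    MSE on the score difference between gold and generated sentences."""
    scored = []
    for hyp in hypotheses:
        score = lm_logprob(hyp)                 # conditional LM probability term
        for d, lam in zip(discriminators, weights):
            score += lam * d(hyp)               # weighted discriminator scores
        scored.append((score, hyp))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [hyp for _, hyp in scored[:5]]       # beam size 5, per the appendix
```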
We have also shown that flexible interaction, which allows the user to return to edit earlier text, improves the specific metrics of creativity and causal-temporal coherence above previous rigid turn-taking approaches. We find that, as well as improving story quality, more interaction makes users more engaged and likely to use the system again. Users tasked with collaborating to improve a specific story quality were able to do so, as judged by independent readers. As the demo system has successfully used an ensemble of collaborative discriminators to improve the same qualities that untrained human users were able to improve even further, this suggests promising future research into human-collaborative stories as training data for new discriminators. It could be used both to strengthen existing discriminators and to develop novel ones, since discriminators are extensible to arbitrarily many story aspects. Acknowledgments We thank the anonymous reviewers for their feedback, as well as the members of the PLUS lab for their thoughts and iterative testing. This work is supported by Contract W911NF-15- 1-0543 with the US Defense Advanced Research Projects Agency (DARPA). Demo Video The three-minute video demonstrating the interaction capabilities of the system can be viewed at https://youtu.be/-hGd2399dnA. (Same video as linked in the paper footnote). Decoding Default diversity (Softmax Temperature) for Storyline Planner is 0.5, for Story Writer it is None (as beamsearch is used an thus can have but does not require a temperature). Beam size for all Story Writer models is 5. Additionally, Storyline Phrases are constrained to be unique (unless a user duplicates them), and Beamsearch is not normalized by length (both choices determined empirically). Training We follow the parameters used in yao2018plan and merity2018regularizing. Mechanical Turk Materials Following are examples of the materials used in doing Mechanical Turk User Studies. Figure FIGREF37 is an example of the All + Creative focused experiment for story-writing. The instructions per experiment differ across all, but the template is the same. Figure FIGREF38 is the survey for ranking stories across various metrics. This remains constant save that story order was shuffled every time to control for any effects of the order a story was read in.
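As a small illustration of the diversity control noted in the appendix, the user-facing "diversity" setting is a softmax temperature applied when the Storyline Planner samples the next phrase (default 0.5), while the Story Writer decodes with beam search. The sketch below shows temperature-scaled sampling only; implementation details beyond the temperature itself are assumptions.

```python
# Sketch of temperature-controlled sampling for the Storyline Planner.
import numpy as np

def sample_with_temperature(logits, temperature=0.5):
    """Lower temperature -> more conservative phrases; higher -> more novel ones."""
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))
```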
displays three different versions of a story written by three distinct models for a human to compare, human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages