Dataset columns:
id: string, length 40
pid: string, length 42
input: string, 8.37k to 169k characters
output: string, 1 to 1.63k characters
b6fb72437e3779b0e523b9710e36b966c23a2a40
b6fb72437e3779b0e523b9710e36b966c23a2a40_0
Q: How many rules had to be defined? Text: Introduction Semantic parsing aims to map natural language questions to the logical forms of their underlying meanings, which can be regarded as programs and executed to yield answers, aka denotations BIBREF0 . In the past few years, neural network based semantic parsers have achieved promising performances BIBREF1 , however, their success is limited to the setting with rich supervision, which is costly to obtain. There have been recent attempts at low-resource semantic parsing, including data augmentation methods which are learned from a small number of annotated examples BIBREF2 , and methods for adapting to unseen domains while only being trained on annotated examples in other domains. This work investigates neural semantic parsing in a low-resource setting, in which case we only have our prior knowledge about a limited number of simple mapping rules, including a small amount of domain-independent word-level matching tables if necessary, but have no access to either annotated programs or execution results. Our key idea is to use these rules to collect modest question-programs pairs as the starting point, and then leverage automatically generated examples to improve the accuracy and generality of the model. This presents three challenges including how to generate examples in an efficient way, how to measure the quality of generated examples which might contain errors and noise, and how to train a semantic parser that makes robust predictions for examples covered by rules and generalizes well to uncovered examples. We address the aforementioned challenges with a framework consisting of three key components. The first component is a data generator. It includes a neural semantic parsing model, which maps a natural language question to a program, and a neural question generation model, which maps a program to a natural language question. We learn these two models in a back-translation paradigm using pseudo parallel examples, inspired by its big success on unsupervised neural machine translation BIBREF3 , BIBREF4 . The second component is a quality controller, which is used for filtering out noise and errors contained in the pseudo data. We construct a phrase table with frequent mapping patterns, therefore noise and errors with low frequency can be filtered out. A similar idea has been worked as posterior regularization in neural machine translation BIBREF5 , BIBREF6 . The third component is a meta learner. Instead of transferring a model pretrained with examples covered by rules to the generated examples, we leverage model-agnostic meta-learning BIBREF7 , an elegant meta-learning algorithm which has been successfully applied to a wide range of tasks including few-shot learning and adaptive control. We regard different data sources as different tasks, and use outputs of the quality controller for stable training. We test our approach on three tasks with different programs, including SQL (and SQL-like) queries for both single-turn and multi-turn questions over web tables BIBREF8 , BIBREF9 , and subject-predicate pairs over a large-scale knowledge graph BIBREF10 . The program for SQL queries for single-turn questions and subject-predicate pairs over knowledge graph is simple while the program for SQL queries for multi-turn questions have top-tier complexity among currently proposed tasks. 
Results show that our approach yields large improvements over rule-based systems, and incorporating different strategies incrementally improves the overall performance. On WikiSQL, our best performing system achieves execution accuracy of 72.7%, comparable to a strong system learned from denotations BIBREF11 with an accuracy of 74.8%. Problem Statement We focus on the task of executive semantic parsing. The goal is to map a natural language question/utterance INLINEFORM0 to a logical form/program INLINEFORM1 , which can be executed over a world INLINEFORM2 to obtain the correct answer INLINEFORM3 . We consider three tasks. The first task is single-turn table-based semantic parsing, in which case INLINEFORM0 is a self-contained question, INLINEFORM1 is a SQL query in the form of “SELECT agg col INLINEFORM2 WHERE col INLINEFORM3 = val INLINEFORM4 AND ...”, and INLINEFORM5 is a web table consisting of multiple rows and columns. We use WikiSQL BIBREF8 as the testbed for this task. The second task is multi-turn table-based semantic parsing. Compared to the first task, INLINEFORM6 could be a follow-up question, the meaning of which depends on the conversation history. Accordingly, INLINEFORM7 in this task supports additional operations that copy previous turn INLINEFORM8 to the current turn. We use SequentialQA BIBREF9 for evaluation. In the third task, we change INLINEFORM9 to a large-scale knowledge-graph (i.e. Freebase) and consider knowledge-based question answering for single-turn questions. We use SimpleQuestions BIBREF10 as the testbed, where the INLINEFORM10 is in the form of a simple INLINEFORM11 -calculus like INLINEFORM12 , and the generation of INLINEFORM13 is equivalent to the prediction of the predicate and the subject entity. We study the problem in a low-resource setting. In the training process, we don't have annotated logical forms INLINEFORM0 or execution results INLINEFORM1 . Instead, we have a collection of natural language questions for the task, a limited number of simple mapping rules based on our prior knowledge about the task, and may also have a small amount of domain-independent word-level matching tables if necessary. These rules are not perfect, with low coverage, and can even be incorrect for some situations. For instance, when predicting a SQL command in the first task, we have a prior knowledge that (1) WHERE values potentially have co-occurring words with table cells; (2) the words “more” and “greater” tend to be mapped to WHERE operator “ INLINEFORM2 ”; (3) within a WHERE clause, header and cell should be in the same column; and (4) the word “average” tends to be mapped to aggregator “avg”. Similarly, when predicting a INLINEFORM3 -calculus in the third task, the entity name might be present in the question, and among all the predicates connected to the entity, the predicate with maximum number of co-occurred words might be correct. We would like to study to what extent our model can achieve if we use rules as the starting point. Learning Algorithm We describe our approach for low-resource neural semantic parsing in this section. We propose to train a neural semantic parser using back-translation and meta-learning. The learning process is summarized in Algorithm FIGREF1 . We describe the three components in this section, namely back-translation, quality control, and meta-learning. 
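The meta-learning component named above (and detailed in the Meta-Learning subsection below) treats rule-covered and rule-uncovered examples as two pseudo-tasks that share one initialization. The following is a minimal sketch of that idea as a simplified first-order MAML update on a toy classifier in PyTorch; the model, data, and hyperparameters are placeholder assumptions and do not reproduce Algorithm FIGREF1 or the paper's parser.

```python
# Hedged sketch: a simplified *first-order* MAML update over two pseudo-tasks
# ("rule-covered" vs. "rule-uncovered" examples). Toy model and toy data only;
# the actual semantic parser, losses, and Algorithm FIGREF1 are not reproduced.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 4)                       # stand-in for the semantic parser
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
inner_lr = 0.1

def toy_batch(n=32):                           # placeholder for (question, program) batches
    return torch.randn(n, 16), torch.randint(0, 4, (n,))

tasks = {"rule_covered": toy_batch(), "rule_uncovered": toy_batch()}

for step in range(100):
    meta_opt.zero_grad()
    for name, (x, y) in tasks.items():
        # Inner loop: adapt a copy of the shared initialization to this task.
        fast = copy.deepcopy(model)
        inner_loss = loss_fn(fast(x), y)
        grads = torch.autograd.grad(inner_loss, fast.parameters())
        with torch.no_grad():
            for p, g in zip(fast.parameters(), grads):
                p -= inner_lr * g
        # Outer loop (first-order approximation): evaluate the adapted copy on a
        # fresh batch from the same task and push its gradient back into `model`.
        x_q, y_q = toy_batch()
        outer_loss = loss_fn(fast(x_q), y_q)
        outer_grads = torch.autograd.grad(outer_loss, fast.parameters())
        for p, g in zip(model.parameters(), outer_grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()
```

A full second-order MAML would differentiate through the inner update as well; the first-order form is shown only because it keeps the sketch short.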
Back-Translation Following the back-translation paradigm BIBREF3 , BIBREF4 , we have a semantic parser, which maps a natural language question INLINEFORM0 to a logical form INLINEFORM1 , and a question generator, which maps INLINEFORM2 to INLINEFORM3 . The semantic parser works for the primary task, and the question generator mainly works for generating pseudo datapoints. We start the training process by applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5 . The resulting dataset is considered as the training data to initialize both the semantic parser and the question generator. Afterwards, both models are improved following the back-translation protocol that target sequences should follow the real data distribution, yet source sequences can be generated with noises. This is based on the consideration that in an encoder-decoder model, the decoder is more sensitive to the data distribution than the encoder. We use datapoints from both models to train the semantic parser because a logical form is structural which follows a grammar, whose distribution is similar to the ground truth. Quality Controller Directly using generated datapoints as supervised training data is not desirable because those generated datapoints contain noises or errors. To address this, we follow the application of posterior regularization in neural machine translation BIBREF5 , and implement a dictionary-based discriminator which is used to measure the quality of a pseudo data. The basic idea is that although these generated datapoints are not perfect, the frequent patterns of the mapping between a phrase in INLINEFORM0 to a token in INLINEFORM1 are helpful in filtering out noise in the generated data with low frequency BIBREF6 . There are multiple ways to collect the phrase table information, such as using statistical phrase-level alignment algorithms like Giza++ or directly counting the co-occurrence of any question word and logical form token. We use the latter one in this work. Further details are described in the appendix. Meta-Learning A simple way to update the semantic parser is to merge the datapoints in hand and train a one-size-fits-all model BIBREF2 . However, this will hurt model's stability on examples covered by rules, and examples of the same task may vary widely BIBREF12 . Dealing with different types of examples requires the model to possess different abilities. For example, tackling examples uncovered by rules in WikiSQL requires the model to have the additional ability to map a column name to a totally different utterance, such as “country” to “nation”. Another simple solution is self-training BIBREF13 . One can train a model with examples covered by rules, and use the model as a teacher to make predictions on examples uncovered by rules and update the model on these predictions. However, self-training is somewhat tautological because the model is learned to make predictions which it already can produce. We learn the semantic parser with meta-learning, regarding learning from examples covered by rules or uncovered by rules as two (pseudo) tasks. Compared to the aforementioned strategies, the advantage of exploring meta-learning here is two-fold. First, we learn a specific model for each task, which provides guarantees about its stability on examples covered by rules. In the test phase, we can use the rule to detect which task an example belongs to, and use the corresponding task-specific model to make predictions. 
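The dictionary-based quality controller described above boils down to counting co-occurrences between question words and logical-form tokens and discarding generated pairs that are supported only by rare mappings. A minimal sketch of that idea, with an illustrative threshold and made-up pseudo-parallel pairs rather than the paper's exact procedure:

```python
# Hedged sketch of a co-occurrence "phrase table" used as a quality controller:
# frequent word/token mappings are kept, and pseudo-pairs supported only by rare
# mappings are filtered out. Counts, threshold, and data are illustrative.
from collections import Counter

def build_phrase_table(pairs):
    """Count co-occurrences of question words and logical-form tokens."""
    table = Counter()
    for question, logical_form in pairs:
        for w in question.lower().split():
            for t in logical_form.split():
                table[(w, t)] += 1
    return table

def keep_pseudo_pair(question, logical_form, table, min_count=2):
    """Keep a generated pair only if every logical-form token is supported by
    at least one sufficiently frequent word-to-token mapping."""
    words = question.lower().split()
    return all(
        any(table[(w, t)] >= min_count for w in words)
        for t in logical_form.split()
    )

# Toy usage with invented pseudo-parallel pairs.
pseudo = [
    ("how many people live in berlin", "SELECT population WHERE city = berlin"),
    ("how many people live in paris", "SELECT population WHERE city = paris"),
    ("population of berlin", "SELECT population WHERE city = berlin"),
]
table = build_phrase_table(pseudo)
print(keep_pseudo_pair("how many people live in berlin",
                       "SELECT population WHERE city = berlin", table))
```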
When dealing with examples covered by rules, we can either directly use rules to make predictions or use the updated model, depending on the accuracy of the learned model on the examples covered by rules on development set. Second, latent patterns of examples may vary widely in terms of whether or not they are covered by rules. Meta-learning is more desirable in this situation because it learns the model's ability to learn, improving model's versatility rather than mapping the latent patterns learned from datapoints in one distribution to datapoints in another distribution by force. Figure FIGREF1 is an illustration of data combination, self-training, and meta-learning. Meta-learning includes two optimizations: the learner that learns new tasks, and the meta-learner that trains the learner. In this work, the meta-learner is optimized by finding a good initialization that is highly adaptable. Specifically, we use model-agnostic meta-learning, MAML BIBREF7 , a powerful meta-learning algorithm with desirable properties including introducing no additional parameters and making no assumptions of the form of the model. In MAML, task-specific parameter INLINEFORM0 is initialized by INLINEFORM1 , and updated using gradient decent based on the loss function INLINEFORM2 of task INLINEFORM3 . In this work, the loss functions of two tasks are the same. The updated parameter INLINEFORM4 is then used to calculate the model's performance across tasks to update the parameter INLINEFORM5 . In this work, following the practical suggestions given by BIBREF17 , we update INLINEFORM6 in the inner-loop and regard the outputs of the quality controller as the input of both tasks. If we only have examples covered by rules, such as those used in the initialization phase, meta-learning learns to learn a good initial parameter that is evaluated by its usefulness on the examples from the same distribution. In the training phase, datapoints from both tasks are generated, and meta-learning learns to learn an initialization parameter which can be quickly and efficiently adapted to examples from both tasks. Experiment We conduct experiments on three tasks to test our approach, including generating SQL (or SQL-like) queries for both single-turn and multi-turn questions over web tables BIBREF8 , BIBREF9 , and predicting subject-predicate pairs over a knowledge graph BIBREF10 . We describe task definition, base models, experiments settings and empirical results for each task, respectively. Table-Based Semantic Parsing Given a natural language INLINEFORM0 and a table INLINEFORM1 with INLINEFORM2 columns and INLINEFORM3 rows as the input, the task is to output a SQL query INLINEFORM4 , which could be executed on table INLINEFORM5 to yield the correct answer of INLINEFORM6 . We conduct experiments on WikiSQL BIBREF8 , which provides 87,726 annotated question-SQL pairs over 26,375 web tables. In this work, we do not use either SQL queries or answers in the training process. We use execution accuracy as the evaluation metric, which measures the percentage of generated SQL queries that result in the correct answer. We describe our rules for WikiSQL here. We first detect WHERE values, which exactly match to table cells. After that, if a cell appears at more than one column, we choose the column name with more overlapped words with the question, with a constraint that the number of co-occurred words is larger than 1. 
By default, a WHERE operator is INLINEFORM0 , except for the case that surrounding words of a value contain keywords for INLINEFORM1 and INLINEFORM2 . Then, we deal with the SELECT column, which has the largest number of co-occurred words and cannot be same with any WHERE column. By default, the SELECT AGG is NONE, except for matching to any keywords in Table TABREF8 . The coverage of our rule on training set is 78.4%, with execution accuracy of 77.9%. We implement a neural network modular approach as the base model, which includes different modules to predict different SQL constituents. This approach is based on the understanding of the SQL grammar in WikiSQL, namely “SELECT $agg $column WHERE $column $op $value (AND $column $op $value)*”, where tokens starting with “$” are the slots to be predicted BIBREF18 . In practice, modular approaches typically achieve higher accuracy than end-to-end learning approach. Specifically, at the first step we implement a sequential labeling module to detect WHERE values and link them to table cells. Advantages of starting from WHERE values include that WHERE values are less ambiguous compared to other slots, and that the number of WHERE clauses can be naturally detected. After that, for each WHERE value, we use the preceding and following contexts in the question to predict its WHERE column and the WHERE operator through two unidirectional LSTM. Column attention BIBREF18 is used for predicting a particular column. Similar LSTM-based classifiers are used to predict SELECT column and SELECT aggregator. According to whether the training data can be processed by our rules, we divide it into two parts: rule covered part and rule uncovered part. For the rule covered part we could get rule covered training data using our rules. For the rule uncovered part we could also get training data using the trained Base model we have, we refer to these data as self-inference training data. Furthermore, we could get more training data by back translation, we refer to these data as question-generation training data. For all the settings, the Base Model is initialized with rule covered training data. In Base + Self Training Method, we finetune the Base model with self-inference training data. In Base + Question Generation Method, we use question-generation training data to finetune our model. In Base + BT Method, we use both self-inference and question-generation data to finetune our model. In Base + BT + QC, we add our quality controller. In Base + BT + QC + MAML, we further add meta-learning. Results are given in Table TABREF5 . We can see that back-translation, quality control and MAML incrementally improves the accuracy. Question generation is better than self-training here because the logical form in WikiSQL is relatively simple, so the distribution of the sampled logical forms is similar to the original one. In the back-translation setting, generated examples come from both self-training and the question generation model. The model performs better than rules on rule-covered examples, and improves the accuracy on uncovered examples. Figure FIGREF12 shows the learning curves of the COLUMN prediction model with or without using MAML. The model using MAML has a better starting point during training, which reflects the effectiveness of the pre-trained parameter. Knowledge-Based Question Answering We test our approach on question answering over another genre of environment: knowledge graph consisting of subject-relation-object triples. 
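As a concrete illustration of the WikiSQL rules sketched above (WHERE values by exact cell match, columns by word overlap, operators and aggregators by keyword cues), here is a simplified version. The keyword lists stand in for the paper's keyword table (Table TABREF8), which is not reproduced here, so they should be read as assumptions, and the operator check is a simplification of the "surrounding words" rule.

```python
# Hedged sketch of the rule-based WikiSQL parser described above. Keyword lists
# are illustrative placeholders, not the paper's Table TABREF8.
AGG_KEYWORDS = {"average": "AVG", "how many": "COUNT", "total": "SUM"}   # assumed examples
OP_KEYWORDS = {">": ["more", "greater"], "<": ["less", "fewer"]}          # assumed examples

def overlap(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

def rule_parse(question, table):
    """table: dict mapping column name -> list of cell strings."""
    q = question.lower()
    where = []
    for col, cells in table.items():
        for cell in cells:
            if cell.lower() in q:                       # WHERE value: exact cell match
                op = "="
                for symbol, words in OP_KEYWORDS.items():
                    # Simplified: the paper inspects words surrounding the value,
                    # here we only look for the keywords anywhere in the question.
                    if any(w in q for w in words):
                        op = symbol
                where.append((col, op, cell))           # header and cell share a column
    # SELECT column: largest word overlap with the question, not a WHERE column.
    where_cols = {c for c, _, _ in where}
    candidates = [c for c in table if c not in where_cols]
    select_col = max(candidates, key=lambda c: overlap(c, question)) if candidates else None
    agg = next((a for kw, a in AGG_KEYWORDS.items() if kw in q), None)
    return {"select": (agg, select_col), "where": where}

toy_table = {"nation": ["germany", "france"], "population": ["83", "67"]}
print(rule_parse("what is the population of france", toy_table))
```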
Given a natural language question and a knowledge graph, the task aims to correctly answer the question with evidences from the knowledge graph. We do our study on SimpleQuestions BIBREF10 , which includes 108,442 simple questions, each of which is accompanied by a subject-relation-object triple. Questions are constructed in a way that subject and relation are mentioned in the question, and that object is the answer. The task requires predicting the entityId and the relation involved in the question. Our rule for KBQA is simple without using a curated mapping dictionary. First, we detect an entity from the question using strict string matching, with the constraint that only one entity from the KB has the same surface string and that the question contains only one entity. After that, we get the connected relations of the detected entity, and assign the relation as the one with maximum number of co-occurred words. The coverage of our rule on training set is 16.0%, with an accuracy of 97.3% for relation prediction. We follow BIBREF22 , and implement a KBQA pipeline consisting of three modules in this work. At the first step, we use a sequence labeling model, i.e. LSTM-CRF, to detect entity mention words in the question. After that, we use an entity linking model with BM25 built on Elasticsearch. Top-K ranked similar entities are retrieved as candidate list. Then, we get all the relations connected to entities in the candidate list as candidate relations, and use a relation prediction model, which is based on Match-LSTM BIBREF23 , to predict the relation. Finally, from all the entities connected to the predicted relation, we choose the one with highest BM25 score as the predicted entity. We use FB2M as the KB, which includes about 2 million triples. The settings are the same as those described in table-based semantic parsing. Results are given in Table TABREF10 , which are consistent with the numbers in WikiSQL. Using back-translation, quality control and MAML incrementally improves the accuracy, and our approach generalizes well to rule-uncovered examples. Conversational Table-Based Semantic Parsing We consider the task of conversational table-based semantic parsing in this part. Compared to single-turn table-based semantic parsing as described in subsection SECREF6 , the meaning of a natural language may also depends on questions of past turns, which is the common ellipsis and co-reference phenomena in conversational agents. Given a natural language question at the current turn, a web table, and previous turn questions in a conversation as the input, the task aims to generate a program (i.e. logical form), which can be executed on the table to obtain the correct answer of the current turn question. We conduct experiments on SequentialQA BIBREF9 which is derived from the WikiTableQuestions dataset BIBREF19 . It contains 6,066 question sequences covering 17,553 question-answer pairs. Each sequence includes 2.9 natural language questions on average. Different from WikiSQL which provides the correct logical form for each question, SequentialQA only annotates the correct answer. This dataset is also harder than the previous two, since it requires complex, highly compositional logical forms to get the answer. Existing approaches are evaluated by question answering accuracy, which measures whether the predicted answer is correct or not. The pipeline of rules in SequentialQA is similar to that of WikiSQL. 
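The KBQA rule described above, strict string matching for the entity followed by choosing the connected relation with the most co-occurring words, can be sketched as follows; the tiny knowledge base is invented for illustration and is not the FB2M subset used in the experiments.

```python
# Hedged sketch of the SimpleQuestions rule described above: strict string
# matching for the entity, then the connected relation that shares the most
# words with the question. The toy KB below is invented for illustration.
def rule_kbqa(question, kb):
    """kb: dict mapping entity surface form -> {relation: object}."""
    q_words = set(question.lower().split())
    # Strict string match: exactly one KB entity surface form appears in the question.
    matches = [e for e in kb if e.lower() in question.lower()]
    if len(matches) != 1:
        return None                                  # rule does not cover this question
    entity = matches[0]
    # Relation with the maximum number of co-occurring words.
    def score(rel):
        return len(q_words & set(rel.lower().replace("_", " ").split()))
    relation = max(kb[entity], key=score)
    return entity, relation, kb[entity][relation]

toy_kb = {
    "barack obama": {"place_of_birth": "honolulu", "profession": "politician"},
    "honolulu": {"contained_by": "hawaii"},
}
print(rule_kbqa("what is the place of birth of barack obama", toy_kb))
```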
Compared to the grammar of WikiSQL, the grammar of SequentialQA has additional actions including copying the previous-turn logical form, no greater than, no more than, and negation. Table TABREF23 shows the additional word-level mapping table used in SequentialQA. The coverage of our rule on training set is 75.5%, with an accuracy of 38.5%. We implement a modular approach on top of a grammar of derivation rules (actions) as the base model. Similar to BIBREF9 , our grammar consists of predefined actions used for predicting SELECT column, WHERE column, WHERE operator, WHERE value, and determining whether it is required to copy the entire action sequence of the previous turn questions. After encoding a question and previous turn questions into vectors, we first use a controller module to predict an action sequence consisting of slots, and then use specific modules to predict the argument of each slot. Similar to BIBREF9 , we use a recurrent structure as the backbone of each module and use the softmax layer for making prediction. The settings are the same as those described in table-based semantic parsing. From Table TABREF20 , we can see that question generation does not work well on this task. This is because the difficulty in generating sequential questions and complex target logical forms. Applying MAML to examples not coming from question generation performs best. We leave contextual question generation as a future work. Conclusion and Future Directions We present an approach to learn neural semantic parser from simple domain-independent rules, instead of annotated logical forms or denotations. Our approach starts from examples covered by rules, which are used to initialize a semantic parser and a question generator in a back-translation paradigm. Generated examples are measured and filtered based on statistic analysis, and then used with model-agnostic meta-learning, which guarantees model's accuracy and stability on rule-covered examples, and acquires the versatility to generalize well on rule-uncovered examples. We conduct experiments on three datasets for table-based and knowledge-based question answering tasks. Results show that incorporating different strategies incrementally improves the performance. Our best model on WikiSQL achieves comparable accuracy to the system learned from denotation. In the future, we plan to focus on more complex logical forms.
WikiSQL - 2 rules (SELECT, WHERE); SimpleQuestions - 1 rule; SequentialQA - 3 rules (SELECT, WHERE, COPY)
e6469135e0273481cf11a6c737923630bc7ccfca
e6469135e0273481cf11a6c737923630bc7ccfca_0
Q: What datasets are used in this paper? Text: (same passage as in the previous record)
WikiSQL, SimpleQuestions, SequentialQA
06202ab8b28dcf3991523cf163b8844b42b9fc99
06202ab8b28dcf3991523cf163b8844b42b9fc99_0
Q: How much labeled data is available for these two languages? Text: Introduction Named Entity Recognition (NER) is a classification task that identifies words in a text that refer to entities (such as dates, person, organization and location names). It is a core task of natural language processing and a component for many downstream applications like search engines, knowledge graphs and personal assistants. For high-resource languages like English, this is a well-studied problem with complex state-of-the-art systems reaching close to or above 90% F1-score on the standard datasets CoNLL03 BIBREF0 and Ontonotes BIBREF1. In recent years, research has been extended to a larger pool of languages including those of developing countries BIBREF2, BIBREF3, BIBREF4, BIBREF5. Often, for these languages (like Hausa and Yorùbá studied here), there exists a large population with access to digital devices and internet (and therefore digital text), but natural language processing (NLP) tools do not support them. One key reason is the absence of labeled training data required to train these systems. While manually labeled, gold-standard data is often only available in small quantities, it tends to be much easier to obtain large amounts of unlabeled text. Distant and weak supervision methods can then be used to create labeled data in a (semi-) automatic way. Using context BIBREF6, BIBREF7, external knowledge and resources BIBREF8, BIBREF9, expert rules BIBREF10, BIBREF11 or self-training BIBREF12, BIBREF13, a corpus or dataset can be labeled quickly and cheaply. Additionally, a variety of noise-handling methods have been proposed to circumvent the negative effects that errors in this automatic annotation might have on the performance of a machine learning classifier. In this work, we study two methods of distant supervision for NER: Automatic annotation rules and matching of lists of entities from an external knowledge source. While distant supervision has been successfully used for high resource languages, it is not straight forward that these also work in low-resource settings where the amount of available external information might be much lower. The knowledge graph of Wikidata e.g. contains 4 million person names in English while only 32 thousand such names are available in Yorùbá, many of which are Western names. Orthogonally to distant supervision, the pre-training of word embeddings is a key component for training many neural NLP models. A vector representation for words is built in an unsupervised fashion, i.e. on unlabeled text. Standard embedding techniques include Word2Vec BIBREF14, GloVe BIBREF15 and FastText BIBREF16. In the last two years, contextualized word embeddings have been proposed BIBREF17, BIBREF18, BIBREF19. At the cost of having a much larger model size, these vector representations take the context of words into account and have been shown to outperform other embeddings in many tasks. In this study, we evaluate both types of representations. The key questions we are interested in this paper are: How do NER models perform for Hausa and Yorùbá, two languages from developing countries? Are distant-supervision techniques relying on external information also useful in low-resource settings? How do simple and contextual word embeddings trade-off in model size and performance? 
Background & Methods ::: Languages Hausa language is the second most spoken indigenous language in Africa with over 40 million native speakers BIBREF20, and one of the three major languages in Nigeria, along with Igbo and Yorùbá. The language is native to the Northern part of Nigeria and the southern part of Niger, and it is widely spoken in West and Central Africa as a trade language in eight other countries: Benin, Ghana, Cameroon, Togo, Côte d'Ivoire, Chad, Burkina Faso, and Sudan. Hausa has several dialects but the one regarded as standard Hausa is the Kananci spoken in the ancient city of Kano in Nigeria. Kananci is the dialect popularly used in many local (e.g VON news) and international news media such as BBC, VOA, DW and Radio France Internationale. Hausa is a tone language but the tones are often ignored in writings, the language is written in a modified Latin alphabet. Despite the popularity of Hausa as an important regional language in Africa and it's popularity in news media, it has very little or no labelled data for common NLP tasks such as text classification, named entity recognition and question answering. Yorùbá language is the third most spoken indigenous language in Africa after Swahilli and Hausa with over 35 million native speakers BIBREF20. The language is native to the South-western part of Nigeria and the Southern part of Benin, and it is also spoken in other countries like Republic of Togo, Ghana, Côte d'Ivoire, Sierra Leone, Cuba and Brazil. Yorùbá has several dialects but the written language has been standardized by the 1974 Joint Consultative Committee on Education BIBREF21, it has 25 letters without the Latin characters (c, q, v, x and z) and with additional characters (ẹ, gb, ṣ , ọ). Yorùbá is a tone language and the tones are represented as diacritics in written text, there are three tones in Yorùbá namely low ( \), mid (“$-$”) and high ($/$). The mid tone is usually ignored in writings. Often time articles written online including news articles like BBC and VON ignore diacritics. Ignoring diacritics makes it difficult to identify or pronounce words except they are in a context. For example, owó (money), ọw (broom), òwò (business), w (honour), ọw (hand), and w (group) will be mapped to owo without diacritics. Similar to the Hausa language, there are few or no labelled datasets for NLP tasks. Background & Methods ::: Datasets & Embeddings The Hausa data used in this paper is part of the LORELEI language pack. It consists of Broad Operational Language Translation (BOLT) data gathered from news sites, forums, weblogs, Wikipedia articles and twitter messages. We use a split of 10k training and 1k test instances. Due to the Hausa data not being publicly available at the time of writing, we could only perform a limited set of experiments on it. The Yorùbá NER data used in this work is the annotated corpus of Global Voices news articles recently released by BIBREF22. The dataset consists of 1,101 sentences (26k tokens) divided into 709 training sentences, 113 validation sentences and 279 test sentences based on 65%/10%/25% split ratio. The named entities in the dataset are personal names (PER), organization (ORG), location (LOC) and date & time (DATE). All other tokens are assigned a tag of "O". For the Yorùbá NER training, we make use of Yorùbá FastText embeddings BIBREF22 and multilingual-BERT that was trained on 104 languages including Yorùbá. 
Instead of the original FastText embeddings BIBREF16, we chose FastText embeddings trained on a multi-domain and high-quality dataset BIBREF22 because it gave better word similarity scores. Background & Methods ::: Distant and Weak Supervision In this work, we rely on two sources of distant supervision chosen for its ease of application: Rules allow to apply the knowledge of domain experts without the manual effort of labeling each instance. They are especially suited for entities that follow specific patterns, like time phrases in text (see also BIBREF23). We use them for the DATE entity. In Yoruba, date expressions are written with the keywords of “ọj” (day), “oṣù” (month), and “ọdn” (year). Similarly, time expressions are written with keywords such as “wákàtí” (hour), “ìṣjú (minute) and “ìṣjú-aaya (seconds). Relative date and time expressions are also written with keywords “ḷodn” (in the year), “loṣù” (in the month), “lọs” (in the week), “lọj” (in the day). An example of a date expression is: “8th of December, 2018” in Yorùbá translates to “ọj 8 oṣù Ọp, ọdún 2018” Lists of Entities can be obtained from a variety of sources like gazetteers, dictionaries, phone books, census data and Wikipedia categories BIBREF24. In recent years, knowledge bases like Freebase and Wikidata have become another option to retrieve entity lists in a structured way. An entity list is created by extracting all names of that type from a knowledge source (e.g. all person names from Wikidata). If a word or token from the unlabeled text matches an entry in an entity list, it is assigned the corresponding label. Experts can add heuristics to this automatic labeling that improve the matching BIBREF25. These include e.g. normalizing the grammatical form of words or filtering common false positives. Another popular method for low-resource NER is the use of cross-lingual information BIBREF26. Alternatives to distant supervision are crowd-sourcing BIBREF27 and non-expert annotations BIBREF28. Background & Methods ::: Learning With Noisy Labels The labels obtained through distant and weak supervision methods tend to contain a high amount of errors. In the Food101N dataset BIBREF29 around 20% of the automatically obtained labels are incorrect while for Clothing1M BIBREF30 the noise rate is more than 60%. Learning with this additional, noisily labeled data can result in lower classification performance compared to just training on a small set of clean labels (cf. e.g. BIBREF31). A variety of techniques have been proposed to handle label noise like modelling the underlying noise process BIBREF32 and filtering noisy instances BIBREF33, BIBREF34. BIBREF35 gives an in-depth introduction into this field and BIBREF36 survey more recent approaches, focusing on the vision domain. In this work, we experiment with three noise handling techniques. The approach by BIBREF37 estimates a noise channel using the EM algorithm. It treats all labels as possibly noisy and does not distinguish between a clean and a noisy part of the data. In contrast, the method by BIBREF38 leverages the existence of a small set of gold standard labels, something that - in our experience - is often available even in low resource settings. Having such a small set of clean labels is beneficial both for the main model itself as well as for the noise handling technique. Both approaches model the relationship between clean and noisy labels using a confusion matrix. This allows adapting the noisy to the clean label distribution during training. 
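A noise channel of this kind can be sketched as a small module placed on top of any tagger: the tagger's distribution over clean labels is multiplied by a learned, row-stochastic confusion matrix to obtain the distribution expected under the noisy labels. The sketch below is a generic illustration in PyTorch, not the exact architecture of the cited models; the label count of five is an assumption matching the PER/ORG/LOC/DATE/O tag set used here.

```python
# Hedged sketch of a confusion-matrix noise layer: the base tagger predicts a
# distribution over clean labels, and a learned matrix maps it to the
# distribution expected under the noisy (distantly supervised) labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseChannel(nn.Module):
    def __init__(self, num_labels=5):
        super().__init__()
        # num_labels^2 additional parameters, initialised near the identity so
        # training starts from the "labels are mostly correct" assumption.
        self.confusion_logits = nn.Parameter(torch.eye(num_labels) * 5.0)

    def forward(self, clean_probs):
        # Row-stochastic confusion matrix: P(noisy label j | clean label i).
        confusion = F.softmax(self.confusion_logits, dim=-1)
        return clean_probs @ confusion       # distribution over noisy labels

# Toy usage: random stand-in tagger output trained against noisy labels
# through the channel; clean labels would bypass the channel.
num_labels = 5
channel = NoiseChannel(num_labels)
clean_probs = F.softmax(torch.randn(8, num_labels), dim=-1)
noisy_targets = torch.randint(0, num_labels, (8,))
loss = F.nll_loss(torch.log(channel(clean_probs) + 1e-8), noisy_targets)
loss.backward()
```

In training, batches with distantly supervised labels would be fed through the channel, while the small clean set would supervise the tagger directly, which is what lets the matrix adapt the noisy distribution toward the clean one.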
Background & Methods ::: Learning With Noisy Labels

The labels obtained through distant and weak supervision methods tend to contain a large number of errors. In the Food101N dataset BIBREF29, around 20% of the automatically obtained labels are incorrect, while for Clothing1M BIBREF30 the noise rate is more than 60%. Learning with this additional, noisily labeled data can result in lower classification performance compared to just training on a small set of clean labels (cf. e.g. BIBREF31). A variety of techniques have been proposed to handle label noise, such as modelling the underlying noise process BIBREF32 and filtering noisy instances BIBREF33, BIBREF34. BIBREF35 gives an in-depth introduction to this field, and BIBREF36 survey more recent approaches, focusing on the vision domain.

In this work, we experiment with three noise handling techniques. The approach by BIBREF37 estimates a noise channel using the EM algorithm. It treats all labels as possibly noisy and does not distinguish between a clean and a noisy part of the data. In contrast, the method by BIBREF38 leverages the existence of a small set of gold standard labels, something that - in our experience - is often available even in low-resource settings. Having such a small set of clean labels is beneficial both for the main model itself and for the noise handling technique. Both approaches model the relationship between clean and noisy labels using a confusion matrix. This allows adapting the noisy to the clean label distribution during training. For a setting with 5 labels, this requires only $5^2=25$ additional parameters to estimate, which can be beneficial when only little training data is available. The technique by BIBREF39 (adapted to NER by BIBREF38) learns a more complex neural network to clean the noisy labels before training with them. It also takes the features into account when cleaning the noise and might, therefore, be able to model more complex noise processes. All three techniques can easily be added to standard neural network architectures for NER.

Models & Experimental Settings

Hausa: Distant supervision on Hausa was performed using lists of person names extracted from Wikipedia data. Since we had limited access to the data, we tested a simplified binary NER-tagging setting (PERSON tags only). As the base model, we used a Bi-LSTM model developed for part-of-speech tagging BIBREF40. For noise handling, we apply the Noise Channel model by BIBREF37.

Yorùbá: For Yorùbá, the entity lists were created by extracting person, location and organization entities from Wikidata in English and Yorùbá. Additionally, a list of person names in Nigeria was obtained from a Yorùbá name website (8,365 names), together with a list of popular Hausa, Igbo, Fulani and Yorùbá people on Wikipedia (9,241 names in total). As a manual heuristic, a minimum name length of 2 was set for the extraction of PER (except for Nigerian names), LOC and ORG entities. The Nigerian names were required to have a minimum length of 3. For the DATE label, a native Yorùbá speaker wrote annotation rules using 11 “date keywords” (“ọj”, “ọs”, “os”, “ọdn”, “wákàtí”, “ḷodn”, “ḷodn-un”, “ọdn-un”, “lọs”, “lọj”, “aago”) following two criteria: (1) a token is a date keyword or follows a date keyword in a sequence, and (2) a token is a digit.

For Yorùbá, we evaluate four settings with different amounts of clean data, namely 1k, 2k, 4k and the full dataset. As distantly supervised data with noisy labels, the full dataset is used. Additionally, 19,559 words from 18 articles of the Global News Corpus (different from the articles in the training corpus) were automatically annotated.

The Bi-LSTM model consists of a Bi-LSTM layer followed by a linear layer to extract input features. The Bi-LSTM layer has a 300-dimensional hidden state for each direction. For the final classification, an additional linear layer is added to output the predicted class distributions. For noise handling, we experiment with the Confusion Matrix model by BIBREF38 and the Cleaning model by BIBREF39. We repeat all Bi-LSTM experiments 20 times and report the average F1-score (following the approach by BIBREF41) and the standard error.

The BERT model is obtained by fine-tuning the pre-trained BERT embeddings on the NER data with an additional, untrained CRF classifier. We fine-tune all parameters of BERT, including those of the CRF, end-to-end. This has been shown to give better performance than using word features extracted from BERT to train a classifier BIBREF19. The evaluation result is obtained as the average of 5 runs; we report the F1-score and the standard error in the results section.
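To make the model and noise-handling setup concrete, here is a minimal PyTorch sketch of a Bi-LSTM tagger with an optional confusion-matrix layer on top of its output distribution. It follows the general confusion-matrix idea described above rather than the exact architectures of the cited works; layer sizes, initialisation and the handling of clean versus distantly supervised batches are illustrative assumptions.

```python
# Minimal sketch: Bi-LSTM tagger plus a confusion-matrix layer that maps clean
# label distributions to noisy ones for distantly supervised batches.
import torch
import torch.nn as nn


class BiLSTMTagger(nn.Module):
    def __init__(self, embedding_matrix: torch.Tensor, num_classes: int):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(embedding_matrix, freeze=False)
        self.lstm = nn.LSTM(embedding_matrix.size(1), 300,
                            batch_first=True, bidirectional=True)
        self.clean_head = nn.Linear(2 * 300, num_classes)
        # num_classes^2 extra parameters relating clean to noisy labels,
        # initialised close to the identity matrix.
        self.noise_matrix = nn.Parameter(torch.eye(num_classes) * 2.0)

    def forward(self, token_ids: torch.Tensor, noisy: bool = False) -> torch.Tensor:
        hidden, _ = self.lstm(self.embed(token_ids))               # (B, T, 600)
        clean_probs = torch.softmax(self.clean_head(hidden), -1)   # (B, T, C)
        if not noisy:
            return clean_probs
        # p(noisy label) = p(clean label) @ row-stochastic confusion matrix
        channel = torch.softmax(self.noise_matrix, dim=-1)         # (C, C)
        return clean_probs @ channel


# Training idea: use noisy=False (and the clean loss) for gold batches and
# noisy=True for distantly supervised batches, e.g. with
#   loss = nn.NLLLoss()(torch.log(probs).flatten(0, 1), labels.flatten())
```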
Results

The results for Hausa are given in Table TABREF14. Training with a mix of 50% clean and 50% distantly-supervised data performs 15 F1-score points below using the whole 100% clean data, which is to be expected due to the lower quality of the distantly-supervised labels. Using the Noise Channel closes half of this gap. Due to the limited availability of the dataset, we could unfortunately not investigate this further, but it already shows the benefits that noise handling can bring.

An evaluation of the distant supervision for Yorùbá is given in Table TABREF14. The quality of the automatically annotated labels differs between the classes. Locations perform better than person and organization names, probably because locations are less diverse and better covered in Wikidata. With the simple date rules alone, we already obtain a 48% F1-score. This shows the importance of leveraging the knowledge of native speakers in automatic annotations. Overall, a decent annotation can be obtained through distant supervision, and it even outperforms some of the actual machine learning models in the low-resource setting. Table TABREF14 compares using only Wikidata as data source versus adding the additional, manually obtained lists of person names. While adding a list of Yorùbá names only improves recall slightly, the integration of Nigerian names boosts recall by 13 points.

The experimental results for Yorùbá are given in Figure FIGREF11. The setting differs from the experiments with Hausa in that there is a small clean training set and additional, distantly-supervised data. For the Bi-LSTM model, adding distantly-supervised labels always helps. In the low-resource settings with 1k and 2k labeled data, it more than doubles the performance. Handling the noise in the distant supervision can result in slight improvements: the noise-cleaning approach struggles somewhat, while the confusion-matrix architecture gives better results in the majority of the scenarios. Training on 5k labeled data with distantly supervised data and noise handling, one can obtain a performance close to using the full 17k manually labeled tokens.

The Bi-LSTM model has 1.50 million parameters (1.53 million for the cleaning model), while BERT has 110 million parameters. There is a clear trade-off between model size and performance. The BERT model is 70 times larger and obtains consistently better results due to its more complex, contextual embeddings pretrained on more data. Still, the F1-score of the BERT model also drops by nearly half in the 1k setting compared to the full dataset. For 1k and 2k labeled data, the distant supervision helps to improve the model's performance. However, once the model trained only on clean data reaches a higher F1-score than the distant supervision technique, the model trained on clean and distantly-supervised data deteriorates. This suggests that the BERT model overfits too much to the noise in the distant supervision.

Conclusion

In this study, we analysed distant supervision techniques and label-noise handling for NER in Hausa and Yorùbá, two languages from developing countries. We showed that these techniques can be successfully leveraged in a realistic low-resource scenario to double a classifier's performance. If model size is not a constraint, the more complex BERT model clearly outperforms the smaller Bi-LSTM architecture. Nevertheless, there is still a large gap between the best-performing model on Yorùbá, at 66 F1-score, and the state of the art in English at around 90. We see several interesting follow-ups to these evaluations. In the future, we want to evaluate whether noise handling methods can also allow the more complex BERT model to benefit from distant supervision.
Regarding model complexity, it would be interesting to experiment with more compact models like DistilBERT BIBREF42 that reach a similar performance with a smaller model size in high-resource settings. In general, we want to investigate in more depth the trade-offs between model complexity and trainability in low-resource scenarios.

Acknowledgments

The experiments on Hausa were possible thanks to the collaboration with Florian Metze and CMU as part of the LORELEI project. Gefördert durch die Deutsche Forschungsgemeinschaft (DFG) – Projektnummer 232722074 – SFB 1102 / Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 232722074 – SFB 1102, the EU-funded Horizon 2020 projects ROXANNE under grant agreement No. 833635 and COMPRISE (http://www.compriseh2020.eu/) under grant agreement No. 3081705.
Hausa: a split of 10k training and 1k test instances; Yorùbá: 1,101 sentences (26k tokens)
Q: What was the performance of the classifiers before/after using distant supervision?
Bi-LSTM: for low-resource settings (<17k clean data), using distant supervision resulted in a large boost in F1-score (e.g. in the 1k setting, from ~9 to ~36 with distant supervision). BERT: F1 boost for <5k clean data (e.g. in the 1k setting, from ~32 to ~47 with distant supervision).
Q: What classifiers were used in the experiments?
Bi-LSTM, BERT
Q: In which countries are Hausa and Yor\`ub\'a spoken? Text: Introduction Named Entity Recognition (NER) is a classification task that identifies words in a text that refer to entities (such as dates, person, organization and location names). It is a core task of natural language processing and a component for many downstream applications like search engines, knowledge graphs and personal assistants. For high-resource languages like English, this is a well-studied problem with complex state-of-the-art systems reaching close to or above 90% F1-score on the standard datasets CoNLL03 BIBREF0 and Ontonotes BIBREF1. In recent years, research has been extended to a larger pool of languages including those of developing countries BIBREF2, BIBREF3, BIBREF4, BIBREF5. Often, for these languages (like Hausa and Yorùbá studied here), there exists a large population with access to digital devices and internet (and therefore digital text), but natural language processing (NLP) tools do not support them. One key reason is the absence of labeled training data required to train these systems. While manually labeled, gold-standard data is often only available in small quantities, it tends to be much easier to obtain large amounts of unlabeled text. Distant and weak supervision methods can then be used to create labeled data in a (semi-) automatic way. Using context BIBREF6, BIBREF7, external knowledge and resources BIBREF8, BIBREF9, expert rules BIBREF10, BIBREF11 or self-training BIBREF12, BIBREF13, a corpus or dataset can be labeled quickly and cheaply. Additionally, a variety of noise-handling methods have been proposed to circumvent the negative effects that errors in this automatic annotation might have on the performance of a machine learning classifier. In this work, we study two methods of distant supervision for NER: Automatic annotation rules and matching of lists of entities from an external knowledge source. While distant supervision has been successfully used for high resource languages, it is not straight forward that these also work in low-resource settings where the amount of available external information might be much lower. The knowledge graph of Wikidata e.g. contains 4 million person names in English while only 32 thousand such names are available in Yorùbá, many of which are Western names. Orthogonally to distant supervision, the pre-training of word embeddings is a key component for training many neural NLP models. A vector representation for words is built in an unsupervised fashion, i.e. on unlabeled text. Standard embedding techniques include Word2Vec BIBREF14, GloVe BIBREF15 and FastText BIBREF16. In the last two years, contextualized word embeddings have been proposed BIBREF17, BIBREF18, BIBREF19. At the cost of having a much larger model size, these vector representations take the context of words into account and have been shown to outperform other embeddings in many tasks. In this study, we evaluate both types of representations. The key questions we are interested in this paper are: How do NER models perform for Hausa and Yorùbá, two languages from developing countries? Are distant-supervision techniques relying on external information also useful in low-resource settings? How do simple and contextual word embeddings trade-off in model size and performance? 
Background & Methods ::: Languages Hausa language is the second most spoken indigenous language in Africa with over 40 million native speakers BIBREF20, and one of the three major languages in Nigeria, along with Igbo and Yorùbá. The language is native to the Northern part of Nigeria and the southern part of Niger, and it is widely spoken in West and Central Africa as a trade language in eight other countries: Benin, Ghana, Cameroon, Togo, Côte d'Ivoire, Chad, Burkina Faso, and Sudan. Hausa has several dialects but the one regarded as standard Hausa is the Kananci spoken in the ancient city of Kano in Nigeria. Kananci is the dialect popularly used in many local (e.g VON news) and international news media such as BBC, VOA, DW and Radio France Internationale. Hausa is a tone language but the tones are often ignored in writings, the language is written in a modified Latin alphabet. Despite the popularity of Hausa as an important regional language in Africa and it's popularity in news media, it has very little or no labelled data for common NLP tasks such as text classification, named entity recognition and question answering. Yorùbá language is the third most spoken indigenous language in Africa after Swahilli and Hausa with over 35 million native speakers BIBREF20. The language is native to the South-western part of Nigeria and the Southern part of Benin, and it is also spoken in other countries like Republic of Togo, Ghana, Côte d'Ivoire, Sierra Leone, Cuba and Brazil. Yorùbá has several dialects but the written language has been standardized by the 1974 Joint Consultative Committee on Education BIBREF21, it has 25 letters without the Latin characters (c, q, v, x and z) and with additional characters (ẹ, gb, ṣ , ọ). Yorùbá is a tone language and the tones are represented as diacritics in written text, there are three tones in Yorùbá namely low ( \), mid (“$-$”) and high ($/$). The mid tone is usually ignored in writings. Often time articles written online including news articles like BBC and VON ignore diacritics. Ignoring diacritics makes it difficult to identify or pronounce words except they are in a context. For example, owó (money), ọw (broom), òwò (business), w (honour), ọw (hand), and w (group) will be mapped to owo without diacritics. Similar to the Hausa language, there are few or no labelled datasets for NLP tasks. Background & Methods ::: Datasets & Embeddings The Hausa data used in this paper is part of the LORELEI language pack. It consists of Broad Operational Language Translation (BOLT) data gathered from news sites, forums, weblogs, Wikipedia articles and twitter messages. We use a split of 10k training and 1k test instances. Due to the Hausa data not being publicly available at the time of writing, we could only perform a limited set of experiments on it. The Yorùbá NER data used in this work is the annotated corpus of Global Voices news articles recently released by BIBREF22. The dataset consists of 1,101 sentences (26k tokens) divided into 709 training sentences, 113 validation sentences and 279 test sentences based on 65%/10%/25% split ratio. The named entities in the dataset are personal names (PER), organization (ORG), location (LOC) and date & time (DATE). All other tokens are assigned a tag of "O". For the Yorùbá NER training, we make use of Yorùbá FastText embeddings BIBREF22 and multilingual-BERT that was trained on 104 languages including Yorùbá. 
Instead of the original FastText embeddings BIBREF16, we chose FastText embeddings trained on a multi-domain and high-quality dataset BIBREF22 because it gave better word similarity scores. Background & Methods ::: Distant and Weak Supervision In this work, we rely on two sources of distant supervision chosen for its ease of application: Rules allow to apply the knowledge of domain experts without the manual effort of labeling each instance. They are especially suited for entities that follow specific patterns, like time phrases in text (see also BIBREF23). We use them for the DATE entity. In Yoruba, date expressions are written with the keywords of “ọj” (day), “oṣù” (month), and “ọdn” (year). Similarly, time expressions are written with keywords such as “wákàtí” (hour), “ìṣjú (minute) and “ìṣjú-aaya (seconds). Relative date and time expressions are also written with keywords “ḷodn” (in the year), “loṣù” (in the month), “lọs” (in the week), “lọj” (in the day). An example of a date expression is: “8th of December, 2018” in Yorùbá translates to “ọj 8 oṣù Ọp, ọdún 2018” Lists of Entities can be obtained from a variety of sources like gazetteers, dictionaries, phone books, census data and Wikipedia categories BIBREF24. In recent years, knowledge bases like Freebase and Wikidata have become another option to retrieve entity lists in a structured way. An entity list is created by extracting all names of that type from a knowledge source (e.g. all person names from Wikidata). If a word or token from the unlabeled text matches an entry in an entity list, it is assigned the corresponding label. Experts can add heuristics to this automatic labeling that improve the matching BIBREF25. These include e.g. normalizing the grammatical form of words or filtering common false positives. Another popular method for low-resource NER is the use of cross-lingual information BIBREF26. Alternatives to distant supervision are crowd-sourcing BIBREF27 and non-expert annotations BIBREF28. Background & Methods ::: Learning With Noisy Labels The labels obtained through distant and weak supervision methods tend to contain a high amount of errors. In the Food101N dataset BIBREF29 around 20% of the automatically obtained labels are incorrect while for Clothing1M BIBREF30 the noise rate is more than 60%. Learning with this additional, noisily labeled data can result in lower classification performance compared to just training on a small set of clean labels (cf. e.g. BIBREF31). A variety of techniques have been proposed to handle label noise like modelling the underlying noise process BIBREF32 and filtering noisy instances BIBREF33, BIBREF34. BIBREF35 gives an in-depth introduction into this field and BIBREF36 survey more recent approaches, focusing on the vision domain. In this work, we experiment with three noise handling techniques. The approach by BIBREF37 estimates a noise channel using the EM algorithm. It treats all labels as possibly noisy and does not distinguish between a clean and a noisy part of the data. In contrast, the method by BIBREF38 leverages the existence of a small set of gold standard labels, something that - in our experience - is often available even in low resource settings. Having such a small set of clean labels is beneficial both for the main model itself as well as for the noise handling technique. Both approaches model the relationship between clean and noisy labels using a confusion matrix. This allows adapting the noisy to the clean label distribution during training. 
For a setting with 5 labels, it only requires $5^2=25$ additional parameters to estimate which could be beneficial when only few training data is available. The technique by BIBREF39 (adapted to NER by BIBREF38) learns a more complex neural network to clean the noisy labels before training with them. It also takes the features into account when cleaning the noise and it might, therefore, be able to model more complex noise processes. All three techniques can be easily added to the existing standard neural network architectures for NER. Models & Experimental Settings Hausa Distant supervision on Hausa was performed using lists of person names extracted from Wikipedia data. Since we had limited access to the data, we tested a simplified binary NER-tagging setting (PERSON-tags only). As a base model, we used a Bi-LSTM model developed for Part-of-Speech tagging BIBREF40. For noise handling, we apply the Noise Channel model by BIBREF37. Yorùbá For Yorùbá, the entity lists were created by extracting person, location and organization entities from Wikidata in English and Yorùbá. Additionally, a list of person names in Nigeria was obtained from a Yorùbá Name website (8,365 names) and list of popular Hausa, Igbo, Fulani and Yorùbá people on Wikipedia (in total 9,241 names). As manual heuristic, a minimum name length of 2 was set for extraction of PER (except for Nigerian names), LOC and ORG. The Nigerian names were set to include names with a minimum length of 3. For the DATE label, a native Yorùbá speaker wrote some annotation rules using 11 “date keywords” (“ọj”, “ọs”, “os”, “ọdn”, “wákàtí” , “ḷodn”, “ḷodn-un”, “ọdn-un” “lọs” , “lọj”, “ aago”) following these two criteria: (1) A token is a date keyword or follows a date keyword in a sequence. (2) A token is a digit. For Yorùbá, we evaluate four settings with different amounts of clean data, namely 1k, 2k, 4k and the full dataset. As distantly supervised data with noisy labels, the full dataset is used. Additionally, 19,559 words from 18 articles of the Global News Corpus (different from the articles in the training corpus) were automatically annotated. The Bi-LSTM model consists of a Bi-LSTM layer followed by a linear layer to extract input features. The Bi-LSTM layer has a 300-dimensional hidden state for each direction. For the final classification, an additional linear layer is added to output predicted class distributions. For noise handling, we experiment with the Confusion Matrix model by BIBREF38 and the Cleaning model by BIBREF39. We repeat all the Bi-LSTM experiments 20 times and report the average F1-score (following the approach by BIBREF41) and the standard error. The BERT model is obtained by fine-tuning the pre-trained BERT embeddings on NER data with an additional untrained CRF classifier. We fine-tuned all the parameters of BERT including that of the CRF end-to-end. This has been shown to give better performance than using word features extracted from BERT to train a classifier BIBREF19. The evaluation result is obtained as an average of 5 runs, we report the F1-score and the standard error in the result section. Results The results for Hausa are given in Table TABREF14. Training with a mix of 50% clean and 50% distantly-supervised data performs 15 F1-score points below using the whole 100% clean data which is to be expected due to the lower quality of the distantly-supervised labels. Using the Noise Channel closes half of this gap. 
Due to the limited availability of the dataset, we could unfortunately not investigate this further, but it already shows the benefits that are possible through noise handling. An evaluation of the distant supervision for Yorùbá is given in Table TABREF14. The quality of the automatically annotated labels differs between the classes. Locations perform better than person and organization names, probably due to locations being less diverse and better covered in Wikidata. With simple date rules, we already obtain a 48% F1-score. This shows the importance of leveraging the knowledge of native speakers in automatic annotations. Overall, a decent annotation can be obtained through distant supervision, and it even outperforms some of the actual machine learning models in the low-resource setting. Table TABREF14 compares using only Wikidata as a data source versus adding additional, manually obtained lists of person names. While adding a list of Yorùbá names only improves recall slightly, the integration of Nigerian names helps to boost recall by 13 points. The experimental results for Yorùbá are given in Figure FIGREF11. The setting differs from the experiments with Hausa in that there is a small clean training set and additional, distantly-supervised data. For the Bi-LSTM model, adding distantly-supervised labels always helps. In the low-resource settings with 1k and 2k labeled data, it more than doubles the performance. Handling the noise in the distant supervision can result in slight improvements. The noise-cleaning approach struggles somewhat, while the confusion-matrix architecture gives better results in the majority of the scenarios. Training on 5k labeled data with distantly supervised data and noise handling, one can obtain a performance close to using the full 17k manually labeled tokens. The Bi-LSTM model has 1.50 million parameters (1.53 million for the cleaning model), while BERT has 110 million parameters. There is a clear trade-off between model size and performance. The BERT model is 70 times larger and obtains consistently better results due to its more complex, contextual embeddings pretrained on more data. Still, the F1-score also drops by nearly half for the BERT model in the 1k setting compared to the full dataset. For 1k and 2k labeled data, the distant supervision helps to improve the model's performance. However, once the model trained only on clean data reaches a higher F1-score than the distant supervision technique, the model trained on clean and distantly-supervised data deteriorates. This suggests that the BERT model overfits to the noise in the distant supervision. Conclusion In this study, we analysed distant supervision techniques and label-noise handling for NER in Hausa and Yorùbá, two languages from developing countries. We showed that they can be successfully leveraged in a realistic low-resource scenario to double a classifier's performance. If model size is not a constraint, the more complex BERT model clearly outperforms the smaller Bi-LSTM architecture. Nevertheless, there is still a large gap between the best-performing model on Yorùbá, at 66 F1-score, and the state of the art in English, at around 90. We see several interesting follow-ups to these evaluations. In the future, we want to evaluate whether noise handling methods can also allow the more complex BERT model to benefit from distant supervision.
Regarding the model complexity, it would be interesting to experiment with more compact models like DistilBERT BIBREF42 that reach a similar performance with a smaller model size for high-resource settings. In general, we want to investigate more in-depth the trade-offs between model complexity and trainability in low-resource scenarios. Acknowledgments The experiments on Hausa were possible thanks to the collaboration with Florian Metze and CMU as part of the LORELEI project. Gefördert durch die Deutsche Forschungsgemeinschaft (DFG) – Projektnummer 232722074 – SFB 1102 / Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 232722074 – SFB 1102, the EU-funded Horizon 2020 projects ROXANNE under grant agreement No. 833635 and COMPRISE (http://www.compriseh2020.eu/) under grant agreement No. 3081705.
Nigeria, Benin, Ghana, Cameroon, Togo, Côte d'Ivoire, Chad, Burkina Faso, and Sudan, Republic of Togo, Ghana, Côte d'Ivoire, Sierra Leone, Cuba and Brazil
Q: What is the agreement score of their annotated dataset? Text: Background In the light of increased vaccine hesitance in various countries, consistent monitoring of public beliefs and opinions about the national immunization program is important. Besides performing qualitative research and surveys, real-time monitoring of social media data about vaccination is a valuable tool to this end. The advantage is that one is able to detect and respond to possible vaccine concerns in a timely manner, that it generates continuous data and that it consists of unsolicited, voluntary user-generated content. Several studies that analyse tweets have already been conducted, providing insight in the content that was tweeted most during the 2009 H1N1 outbreak BIBREF0, the information flow between users with a certain sentiment during this outbreak BIBREF1, or trends in tweets that convey, for example, the worries on efficacy of HPV vaccines BIBREF2, BIBREF3. While human coders are best at deploying world knowledge and interpreting the intention behind a text, manual coding of tweets is laborious. The above-mentioned studies therefore aimed at developing and evaluating a system to code tweets automatically. There are several systems in place that make use of this automatic coding. The Vaccine Confidence Project BIBREF4 is a real-time worldwide internet monitor for vaccine concerns. The Europe Media Monitor (EMM) BIBREF5 was installed to support EU institutions and Member State organizations with, for example, the analysis real-time news for medical and health-related topics and with early warning alerts per category and country. MEDISYS, derived from the EMM and developed by the Joint Research Center of the European Commission BIBREF6, is a media monitoring system providing event-based surveillance to rapidly identify potential public health threats based on information from media reports. These systems cannot be used directly for the Netherlands because they do not contain search words in Dutch, are missing an opinion-detection functionality, or do not include categories of the proper specificity. Furthermore, opinions towards vaccination are contextualized by national debates rather than a multinational debate BIBREF7, which implies that a system for monitoring vaccination stance on Twitter should ideally be trained and applied to tweets with a similar language and nationality. Finally, by creating an automatic system for mining public opinions on vaccination concerns, one can continue training and adapting the system. We therefore believe it will be valuable to build our own system. Besides analysing the content of tweets, several other applications that use social media with regard to vaccination have been proposed. They, for example, use data about internet search activity and numbers of tweets as a proxy for (changes in) vaccination coverage or for estimating epidemiological patterns. Huang et al. BIBREF8 found a high positive correlation between reported influenza attitude and behavior on Twitter and influenza vaccination coverage in the US. In contrast, Aquino et al. BIBREF9 found an inverse correlation between Mumps, Measles, Rubella (MMR) vaccination coverage and tweets, Facebook posts and internet search activity about autism and MMR vaccine in Italy. This outcome was possibly due to a decision of the Court of Justice in one of the regions to award vaccine-injury compensation for a case of autism. 
Wagner, Lampos, Cox and Pebody BIBREF10 assessed the usefulness of geolocated Twitter posts and Google search as source data to model influenza rates, by measuring their fit to the traditional surveillance outcomes and analyzing the data quality. They find that Google search could be a useful alternative to the regular means of surveillance, while Twitter posts do not correlate well due to a lower volume and a demographic bias. Lampos, de Bie and Cristianini BIBREF11 also make use of geolocated Twitter posts to track influenza rates, and present a monitoring tool with a daily flu-score based on weighted keywords. Various studies BIBREF12, BIBREF13, BIBREF14 show that estimates of influenza-like illness symptoms mentioned on Twitter can be exploited to track reported disease levels relatively accurately. However, other studies BIBREF15, BIBREF16 showed that this was only the case when looking at severe cases (e.g. hospitalizations, deaths) or only for the start of the epidemic when interest from journalists was still high. Other research focuses on detecting discussion communities on vaccination on Twitter BIBREF17 or analysing semantic networks BIBREF18 to identify the most relevant and influential users as well as to better understand complex drivers of vaccine hesitancy for public health communication. Tangherlini et al. BIBREF19 explore what can be learned about the vaccination discussion from the realm of “mommy blogs”: parents posting messages about children’s health care on forum websites. They aim to obtain insights into the underlying narrative frameworks, and analyse the topics of the messages using Latent Dirichlet Allocation (LDA) BIBREF20. They find that the most prominent frame is a focus on the exemption of one’s child from receiving a vaccination in school. The motivation against vaccination is most prominently based on personal belief about health, but could also be grounded in religion. Surian et al. BIBREF21 also apply topic modeling to distinguish dominant opinions in the discussion about vaccination, and focus on HPV vaccination as discussed on Twitter. They find a common distinction between tweets reporting on personal experience and tweets that they characterize as `evidence’ (statements of having had a vaccination) and `advocacy’ (statements that support vaccination). Most similar to our work is the study by Du, Xu, Song, Liu and Tao BIBREF2. With the ultimate aim of improving vaccine uptake, they applied supervised machine learning to analyse the stance towards vaccination as conveyed on social media. Messages were labeled as either related to vaccination or unrelated, and, when related, as ‘positive’, ‘negative’ or ‘neutral’. The ‘negative’ category was further broken down into several considerations, such as ‘safety’ and ‘cost’. After having annotated 6,000 tweets, they trained a classifier on different combinations of features, obtaining the highest macro F1-score (the average of the separate F1-scores for each prediction category) of $0.50$ and micro F1-score (the F1-score over all predictions) of $0.73$. Tweets with a negative stance that point to safety risks could best be predicted, at an optimal F1-score of $0.75$, while the other five sub-categories with a negative stance were predicted at an F1-score below $0.5$ or even $0.0$. Like Du et al. BIBREF2, we focus on analysing sentiment about vaccination using Twitter as a data source and applying supervised machine learning approaches to extract public opinion from tweets automatically.
In contrast, in our evaluation we focus on detecting messages with a negative stance in particular. Accurately monitoring such messages helps to recognize discord in an early stage and take appropriate action. We do train machine learning classifiers on modeling other categories than the negative stance, evaluating whether this is beneficial to detecting tweets with a negative stance. For example, we study whether it is beneficial to this task to model tweets with a positive and neutral stance as well. We also inquire whether a more fine-grained categorization of sentiment (e.g.: worry, relief, frustration and informing) offers an advantage. Apart from comparing performance in the context of different categorizations, we compare different machine learning algorithms and compare data with different levels of annotation reliability. Finally, the performance of the resulting systems is compared to regular sentiment analysis common to social media monitoring dashboards. At the public health institute in the Netherlands, we make use of social media monitoring tools offered by Coosto. For defining whether a message is positive, negative or neutral with regard to vaccination, this system makes use of the presence or absence of positive or negative words in the messages. We believe that we could increase the sensitivity and specificity of the sentiment analysis by using supervised machine learning approaches trained on a manually coded dataset. The performance of our machine learning approaches is therefore compared to the sentiment analysis that is currently applied in the Coosto tool. Implementation We set out to curate a corpus of tweets annotated for their stance towards vaccination, and to employ this corpus to train a machine learning classifier to distinguish tweets with a negative stance towards vaccination from other tweets. In the following, we will describe the stages of data acquisition, from collection to labeling. Implementation ::: Data collection We queried Twitter messages that refer to a vaccination-related key term from TwiNL , a database with IDs of Dutch Twitter messages from January 2012 onwards BIBREF22. In contrast to the open Twitter Search API, which only allows one to query tweets posted within the last seven days, TwiNL makes it possible to collect a much larger sample of Twitter posts, ranging several years. We queried TwiNL for different key terms that relate to the topic of vaccination in a five-year period, ranging from January 1, 2012 until February 8, 2017. Query terms that we used were the word `vaccinatie’ (Dutch for `vaccination’) and six other terms closely related to vaccination, with and without a hashtag (`#’). Among the six words is `rijksvaccinatieprogramma’, which refers to the vaccination programme in The Netherlands. An overview of all query terms along with the number of tweets that could be collected based on them is displayed in Table TABREF5. We collected a total of 96,566 tweets from TwiNL, which we filtered in a number of ways. First, retweets were removed, as we wanted to focus on unique messages. This led to a removal of 31% of the messages. Second, we filtered out messages that contain a URL. Such messages often share a news headline and include a URL to refer to the complete news message. As a news headline does not reflect the stance of the person who posted the tweet, we decided to apply this filtering step. 
It is likely that part of the messages with a URL do include a message composed by the sender itself, but this step helps to clean many unwanted messages. Third, we removed messages that include a word related to animals and traveling (`dier’, animal; `landbouw’, agriculture; and `teek’, tick), as we strictly focus on messages that refer to vaccination that is part of the governmental vaccination program. 27,534 messages were left after filtering. This is the data set that is used for experimentation. Implementation ::: Data annotation The stance towards vaccination was categorized into `Negative’, `Neutral’, `Positive’ and `Not clear’. The latter category was essential, as some posts do not convey enough information about the stance of the writer. In addition to the four-valued stance classes we included separate classes grouped under relevance, subject and sentiment as annotation categories. With these additional categorizations we aimed to obtain a precise grasp of all possibly relevant tweet characteristics in relation to vaccination, which could help in a machine learning setting. The relevance categories were divided into `Relevant’, `Relevant abroad’ and `Irrelevant’. Despite our selection of vaccination-related keywords, tweets that mention these words might not refer to vaccination at all. A word like `vaccine’ might be used in a metaphorical sense, or the tweet might refer to vaccination of animals. The subject categorization was included to describe what the tweet is about primarily: `Vaccine’, `Disease’ or `Both’. We expected that a significant part of the tweets would focus on the severeness of a disease when discussing vaccination. Distinguishing these tweets could help the detection of the stance as well. Finally, the sentiment of tweets was categorized into `Informative’, `Angry/Frustration’, `Worried/Fear/Doubts’, `Relieved’ and `Other’, where the latter category lumps together occasional cases of humor, sarcasm, personal experience, and question raised. These categories were based on the article by BIBREF0, and emerged from analysing their H1N1-related tweets. The `Informative’ category refers to a typical type of message in which information is shared, potentially in support of a negative or positive stance towards vaccination. If the message contained more than one sentiment, the first sentiment identified was chosen. Table TABREF6 shows examples of tweets for the above-mentioned categories. We aimed at a sufficient number of annotated tweets to feed a machine learning classifier with. The majority of tweets were annotated twice. We built an annotation interface catered to the task. Upon being presented with the text of a Twitter post, the annotator was first asked whether the tweet was relevant. In case it was deemed relevant, the tweet could be annotated for the other categorizations. Otherwise, the user could click `OK’ after which he or she was directly presented with a new Twitter post. The annotator was presented with sampled messages that were either not annotated yet or annotated once. We ensured a fairly equal distribution of these two types, so that most tweets would be annotated twice. As annotators, we hired four student assistants and additionally made use of the Radboud Research Participation System. We asked participants to annotate for the duration of an hour, in exchange for a voucher valued ten Euros, or one course credit. 
Before starting the annotation, the participants were asked to read the annotation manual, with examples and an extensive description of the categories, and were presented with a short training round in which feedback on their annotations was given. The annotation period lasted for six weeks. We stopped when the number of applicants dropped. A total of 8,259 tweets were annotated, of which 6,472 were annotated twice (78%). 65 annotators joined in the study, with an average of $229.5$ annotated tweets per person. The number of annotations per person varied considerably, with $2,388$ tweets coded by the most active annotator. This variation is due to the different ways in which annotators were recruited: student-assistants were recruited for several days, while participants recruited through the Radboud Research Participation System could only join for the duration of an hour. We calculated inter-annotator agreement by Krippendorff's Alpha BIBREF23, which accounts for different annotator pairs and empty values. To also zoom in on the particular agreement by category, we calculated mutual F-scores for each of the categories. This metric is typically used to evaluate system performance by category on gold standard data, but could also be applied to annotation pairs by alternating the roles of the two annotators between classifier and ground truth. A summary of the agreement by categorization is given in Table TABREF10. While both the Relevance and Subject categorizations are annotated at a percent agreement of $0.71$ and $0.70$, their agreement scores are only fair, at $\alpha =0.27$ and $\alpha =0.29$. The percent agreement on Stance and Sentiment, which carry more categories than the former two, is $0.54$ for both. Their agreement scores are also fair, at $\alpha =0.35$ and $\alpha =0.34$. The mutual F-scores show marked differences in agreement by category, where the categories that were annotated most often typically yield a higher score. This holds for the Relevant category ($0.81$), the Vaccine category ($0.79$) and the Positive category ($0.64$). The Negative category yields a mutual F-score of $0.42$, which is higher than the more frequently annotated categories Neutral ($0.23$) and Not clear ($0.31$). We found that these categories are often confused. After combining the annotations of the two, the stance agreement would be increased to $\alpha =0.43$. The rather low agreement over the annotation categories indicates the difficulty of interpreting stance and sentiment in tweets that discuss the topic of vaccination. We therefore proceed with caution to categorize the data for training and testing our models. The agreed upon tweets will form the basis of our experimental data, as was proposed by Jakubiçek, Kovar and Rychly BIBREF24, while the other data is added as additional training material to see if the added quantity is beneficial to performance. We will also annotate a sample of the agreed upon tweets, to make sure that these data are reliable in spite of the low agreement rate. Implementation ::: Data categorization The labeled data that we composed based on the annotated tweets are displayed in Table TABREF11. We combined the Relevant and Relevant abroad categories into one category (`Relevant’), as only a small part of the tweets was annotated as Relevant abroad. We did not make use of the subject annotations, as a small minority of the tweets that were relevant referred a disease only. For the most important categorization, stance, we included all annotated labels. 
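The per-category mutual F-score described above can be sketched as follows, assuming two aligned lists of labels from a pair of annotators; the snippet relies on scikit-learn and the names are ours (Krippendorff's Alpha itself can be computed with, for example, the krippendorff package).

```python
from sklearn.metrics import f1_score

def mutual_f_scores(labels_a, labels_b, categories):
    """Per-category F1 between two annotators. Treating annotator A as
    ground truth and annotator B as prediction (or the other way around)
    yields the same per-category score, since swapping the roles only
    swaps precision and recall."""
    return {cat: f1_score(labels_a, labels_b, labels=[cat], average="macro")
            for cat in categories}

# Example:
# mutual_f_scores(["Positive", "Negative", "Neutral"],
#                 ["Positive", "Neutral", "Neutral"],
#                 ["Negative", "Neutral", "Positive", "Not clear"])
```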
Finally, we combined part of the more frequent sentiment categories with Positive. We distinguish three types of labeled tweets: `strict’, `lax’ and `one’. The strictly labeled tweets were labeled by both annotators with the same label. The lax labels describe tweets that were only annotated with a certain category by one of the coders. The categories were ordered by importance to decide on the lax labels. For instance, in case of the third categorization, Negative was preferred over Positive, followed by Neutral, Not clear and Irrelevant. If one of the annotators labeled a tweet as Positive and the other as Neutral, the lax label for this tweet is Positive. In table TABREF11, the categories are ordered by preference as imposed on the lax labeling. The `one' labeling applies to all tweets that were annotated by only one annotator. Note that the total counts can differ between label categorizations due to the lax labeling: the counts for Positive labels in the Polarity + sentiment labeling (Positive + Frustration, Positive + Information and Positive + other) do not add up to the count of the Positive label in the Polarity labeling. With the `strict’, `lax’ and `one’ labeling, we end up with four variants of data to experiment with: only strict, strict + lax, strict + one and strict + lax + one. The strict data, which are most reliable, are used in all variants. By comparing different combinations of training data, we test whether the addition of less reliably labeled data (lax and/or one) boosts performance. The four labelings have an increasing granularity, where the numbers of examples for the Negative category are stable across each labeling. In the first labeling, these examples are contrasted with any other tweet. It hence comprises a binary classification task. In the second labeling, irrelevant tweets are indicated in a separate category. The Other class here represents all relevant tweets that do not convey a negative stance towards vaccination. In the third labeling, this class is specified as the stance categories Positive, Neutral and Not clear. In the fourth labeling, the Positive category, which is the most frequent polarity class, is further split into `Positive + frustration’, `Positive + Information’ and `Positive + Other’. Positivity about vaccination combined with a frustration sentiment reflects tweets that convey frustration about the arguments of people who are negative about vaccination (e.g.: "I just read that a 17 year old girl died of the measles. Because she did not want an inoculation due to strict religious beliefs. -.- #ridiculous"). The Positive + Information category reflects tweets that provide information in favor of vaccination, or combined with a positive stance towards vaccination (e.g.: "#shingles is especially common with the elderly and chronically diseased. #vaccination can prevent much suffering. #prevention"). In line with Kovár, Rychlý and Jakubíček BIBREF25, we evaluate system performance only on the reliable part of the annotations - the instances labeled with the same label by two annotators. As the overall agreement is not sufficient, with Krippendorff's Alpha ranging between $0.27$ and $0.35$, the first author annotated 300 tweets sampled from the strict data (without knowledge of the annotations) to rule out the possibility that these agreed upon annotations are due to chance agreement. Comparing these new annotations to the original ones, the Negative category and the Positive category are agreed upon at mutual F-scores of $0.70$ and $0.81$. 
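A minimal sketch of the strict/lax/`one' resolution described at the start of this subsection, assuming the priority order of the polarity labeling; the function name is ours.

```python
# Priority order used for the polarity labeling, as described above.
POLARITY_PRIORITY = ["Negative", "Positive", "Neutral", "Not clear", "Irrelevant"]

def resolve(label_a, label_b=None, priority=POLARITY_PRIORITY):
    """Return (label, reliability) for a tweet annotated once or twice."""
    if label_b is None:                # annotated by a single coder -> 'one'
        return label_a, "one"
    if label_a == label_b:             # identical annotations -> 'strict'
        return label_a, "strict"
    # disagreement -> 'lax': keep the label that ranks higher in the priority
    return min((label_a, label_b), key=priority.index), "lax"

# Example: resolve("Neutral", "Positive") returns ("Positive", "lax")
```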
The percent agreement on the binary classification scheme (e.g.: Negative versus Other) is $0.92$, with $\alpha =0.67$, which decreases to $\alpha =0.55$ for the Relevance categorization, $\alpha =0.54$ for the Polarity categorization and $\alpha =0.43$ for the Polarity + Sentiment categorization. We find that instances of a negative and positive stance can be clearly identified by humans, while the labels Neutral and Not Clear are less clear cut. Since it is our focus to model tweets with a negative stance, the agreement on the binary decision between Negative and Other is just sufficient to use for experimentation based on Krippendorff’s BIBREF26 remark that “$\alpha \ge .667$ is the lowest conceivable limit” (p.241). In our experimental set-up we will therefore only evaluate our system performance on distinguishing the Negative category from any other category in the strict data. Implementation ::: Experimental Set-up For each combination of labeling (four types of labeling) and training data (four combinations of training data) we train a machine learning classifier to best distinguish the given labels. Two different classifiers are compared: Multinomial Naive Bayes and Support Vector Machines (SVM). In total, this makes for 32 variants (4 labelings $\times $ 4 combinations of training data $\times $ 2 classifiers). All settings are tested through ten-fold cross-validation on the strict data and are compared against two rule-based sentiment analysis baselines and two random baselines. All components of the experimental set-up are described in more detail below. Implementation ::: Experimental Set-up ::: Preprocessing To properly distinguish word tokens and punctuation we tokenized the tweets by means of Ucto, a rule-based tokenizer with good performance on the Dutch language, and with a configuration specific for Twitter. Tokens were lowercased in order to focus on the content. Punctuation was maintained, as well as emoji and emoticons. Such markers could be predictive in the context of a discussion such as vaccination. To account for sequences of words and characters that might carry useful information, we extracted word unigrams, bigrams, and trigrams as features. Features were coded binary, i.e. set to 1 if a feature is seen in a message and set to 0 otherwise. During training, all features apart from the top 15,000 most frequent ones were removed. Implementation ::: Experimental Set-up ::: Machine Learning We applied two machine learning algorithms with a different perspective on the data: Multinomial Naive Bayes and SVM. The former algorithm is often used on textual data. It models the Bayesian probability of features to belong to a class and makes predictions based on a linear calculation. Features are naively seen as independent of one another BIBREF27. In their simplest form, SVMs are binary linear classifiers that make use of kernels. They search for the optimal hyperplane in the feature space that maximizes the geometric margin between any two classes. The advantage of SVMs is that they provide a solution to a global optimization problem, thereby reducing the generalization error of the classifier BIBREF28. We applied both algorithms by means of the scikit-learn toolkit, a python library that offers implementations of many machine learning algorithms BIBREF29. To cope with imbalance in the number of instances per label, for Multinomial Naive Bayes we set the Alpha parameter to $0.0$ and muted the fit prior. 
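The feature extraction and Naive Bayes settings described above map almost directly onto scikit-learn. The sketch below is an approximation of that set-up rather than the authors' exact code: it substitutes scikit-learn's default tokenizer for Ucto, and the object names are ours.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

nb_pipeline = Pipeline([
    # lowercased word uni-, bi- and trigrams as binary presence features,
    # keeping only the 15,000 most frequent n-grams
    ("features", CountVectorizer(ngram_range=(1, 3), binary=True,
                                 lowercase=True, max_features=15000)),
    # alpha set to 0.0 and the fit prior muted, as described above
    # (scikit-learn may warn and clip a zero alpha, depending on the version)
    ("classifier", MultinomialNB(alpha=0.0, fit_prior=False)),
])

# nb_pipeline.fit(train_tweets, train_labels)
```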
For SVM, we used a linear kernel with the $C$ parameter set to $1.0$ and a balanced class weight. Implementation ::: Experimental Set-up ::: Baselines As baselines, we applied two rule-based sentiment analysis systems for Dutch as well as two random baselines. The first rule-based sentiment analysis system is Pattern, an off-the-shelf sentiment analysis system that makes use of a list of adjectives with a positive or negative weight, based on human annotations BIBREF30. Sentences are assigned a score between $-1.0$ and $1.0$ by multiplying the scores of their adjectives. Bigrams like `horribly good’ are seen as one adjective, where the adjective `horribly’ increases the positivity score of `good’. We translated the polarity score into the discrete labels `Negative’, `Positive’ and `Neutral’ by using the training data to infer which threshold leads to the best performance on the `Negative’ category. The second baseline is the sentiment analysis offered by the social media monitoring dashboard Coosto. As Coosto is a commercial product, there is no public documentation on their sentiment analysis tool. In addition to these two baselines, we applied two random baselines: predicting the negative class randomly for 50% of the messages and predicting the negative class randomly for 15% of the messages. The latter proportion relates to the proportion of vaccination-hesitant tweets in the strictly labeled data on which we test the systems. Implementation ::: Evaluation We evaluate performance by means of ten-fold cross-validation on the strictly labeled data. In each of the folds, 90% of the strictly labeled data is used as training data, which are complemented with the laxly labeled data and/or the data labeled by one annotator, in three of the four training data variants. Performance is always tested on the strict data. As evaluation metrics we calculate the F1-score and the Area Under the ROC Curve (AUC) on predicting the negative stance towards vaccination in the test tweets. Results We trained machine learning (ML) classifiers to distinguish Twitter messages with a negative stance towards vaccination, alternating three aspects of the system: the labels to train on, the composition of the training data and the ML algorithm. The results are presented in Table TABREF15, as the F1-score and AUC of any setting on correctly predicting tweets with a negative stance. Systems with specific combinations of the ML classifier and size of the training data are given in the rows of the table. The four types of labelings are listed in the columns. The results show a tendency for each of the three manipulations. Regarding the ML algorithm, SVM consistently outperforms Naive Bayes for this task. Furthermore, adding additional training data, albeit less reliable, generally improves performance. Training a model on all available data (strict + lax + one) leads to an improvement over using only the strict data, while adding only the laxly labeled data is generally better than using all data. Adding only the data labeled by one annotator often leads to a worse performance. With respect to the labeling, the Polarity-sentiment labeling generally leads to the best outcomes, although the overall best outcome is yielded by training an SVM on Polarity labeling with strict data appended by lax data, at an area under the curve score of $0.66$. The best reported performance is an F1-score of $0.36$ and an AUC of $0.66$. In comparison to the baselines (Table TABREF16), these scores are considerably higher. 
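The SVM variant and the ten-fold evaluation of the Negative category can be sketched in the same style. This is again a hedged approximation: how class scores for the AUC were obtained is not specified, so the sketch uses Platt-scaled probabilities (probability=True), which is our choice.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

svm_pipeline = Pipeline([
    ("features", CountVectorizer(ngram_range=(1, 3), binary=True,
                                 lowercase=True, max_features=15000)),
    # linear kernel, C=1.0 and balanced class weights, as described above
    ("classifier", SVC(kernel="linear", C=1.0, class_weight="balanced",
                       probability=True)),
])

def evaluate_negative(texts, labels, n_splits=10):
    texts = np.asarray(texts, dtype=object)
    labels = np.asarray(labels, dtype=object)
    f1s, aucs = [], []
    for train_idx, test_idx in StratifiedKFold(n_splits=n_splits).split(texts, labels):
        svm_pipeline.fit(texts[train_idx], labels[train_idx])
        predictions = svm_pipeline.predict(texts[test_idx])
        f1s.append(f1_score(labels[test_idx], predictions,
                            labels=["Negative"], average="macro"))
        classes = list(svm_pipeline.named_steps["classifier"].classes_)
        scores = svm_pipeline.predict_proba(texts[test_idx])[:, classes.index("Negative")]
        aucs.append(roc_auc_score(labels[test_idx] == "Negative", scores))
    return float(np.mean(f1s)), float(np.mean(aucs))
```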
Nevertheless, there is room for improvement. The performance of the random baselines, with F1-scores of $0.18$ (50%) and $0.13$ (15%), indicates that the minimal performance on this task is rather low. The rule-based sentiment analyses yield better performances, at an F1-score of $0.20$ for Pattern and $0.25$ for Coosto. To analyse the behavior of the best ML system, we present a confusion table of its classifications in Table TABREF17. The Irrelevant category is most often classified with one of the other categories, while the Positive and Negative categories are the biggest confusables. The classifier is possibly identifying features that denote a stance, but struggles to distinguish positive from negative. To gain insight into the potential of increasing the amount of training data, we applied the best ML system (SVM trained on strict and lax data on the polarity labels) on 10% of the strictly labeled data, starting with a small sample of the data and increasing it to all available data (excluding the test data). The learning curve is presented in Figure FIGREF18. It shows an improved performance until the last training data is added, indicating that more training data would likely yield better performance. Results ::: Comparison machine learning and rule-based sentiment analysis A confusion table of the predictions of the best of the two rule-based baselines, Pattern, and the best ML system is displayed in Table TABREF19. Only 192 tweets are labeled by both systems as Negative, while the best ML system accounts for almost double this amount and Pattern for three times as much. Comparing the predictions to the gold standard labeling, 99 of the tweets predicted only by the best ML system as Negative are correct (27%), opposed to 51 that are exclusive to Pattern (8%). Of the tweets that were classified by both as negative, 63 are correct (33%). This shows that the approaches have a rather complementary view on tweets with a negative stance. To gain more insight into the behavior of both approaches, we applied them to 15,577 unlabeled tweets. Table TABREF20 presents a confusion table with the numbers of tweets that were classified as Negative or another category by both approaches. Again, pattern accounts for the majority of negatively labeled messages, and the overlap is small. Two of the authors validated for a sample of 600 messages whether they actually manifested a negative attitude towards vaccination: 200 messages that were uniquely classified by the best ML system as Negative, 200 messages that were solely labeled as Negative by Pattern and 200 messages that were classified by both systems as Negative. This validation showed the same tendency as for the labeled data, with a higher precision of the best ML system in comparison to Pattern (33.5% versus 21% of the messages correctly predicted) and the highest precision when both systems predicted the negative class (36%). The complementary view on tweets with a negative stance between the best ML system and rule-based sentiment analysis becomes clear from their differing predictions. To make this difference concrete, we present a selection of the messages predicted as Negative by both systems in Table TABREF21. The first three are only predicted by the best ML system as Negative, and not by Pattern, while the fourth until the sixth examples are only seen as Negative by Pattern. 
Where the former give arguments (`can not be compared...’, `kids are dying from it’) or take stance (`I’m opposed to...’), the latter examples display more intensified words and exclamations (`that’s the message!!’, `Arrogant’, `horrific’) and aggression towards a person or organization. The last three tweets are seen by both systems as Negative. They are characterized by intensified words that linked strongly to a negative stance towards vaccination (`dangerous’, `suffering’, `get lost with your compulsory vaccination’). Table TABREF21 also features tweets that were predicted as Negative by neither the best ML-system nor Pattern, representing the most difficult instances of the task. The first two tweets include markers that explicitly point to a negative stance, such as `not been proven' and `vaccinating is nonsense'. The third tweet manifests a negative stance by means of the sarcastic phrase `way to go' (English translation). The use of sarcasm, where typically positive words are used to convey a negative valence, complicates this task of stance prediction. The last tweet advocates an alternative to vaccination, which implicitly can be explained as a negative stance towards vaccination. Such implicitly packaged viewpoints also hamper the prediction of negative stance. Both sarcasm and implicit stance could be addressed by specific modules. Results ::: Improving recall For monitoring the number of Twitter messages over time that are negative towards vaccination, it is arguably more important to detect them at a high recall than at a high precision. False positives (messages incorrectly flagged as Negative) could be filtered manually by a human end user, while False Negatives (messages with a negative stance that are not detected) will be missed. We set out to improve recall, making use of classifier confidence scores and the complementary classifications of Pattern and the best ML system. A first recall-improving approach is to reset the prediction threshold for the Negative category. For any given instance, the SVM classifier estimates the probability of all categories it was trained on. It will predict the Negative category for an instance if its probability exceeds the probabilities of the other categories. This prediction can be altered by changing the threshold; setting the threshold higher will generally mean that fewer instances will be predicted as a Negative category (corresponding to a higher precision), whereas setting it lower will mean more instances will be predicted as such (corresponding to a higher recall). Thus, the balance between precision and recall can be set as desired, to favor one or another. However, in many cases, changing the threshold will not lead to a (strong) increase in overall performance. Figure FIGREF22 presents the balance between recall and precision as a result of predicting the Negative category with the best ML system, when the threshold for this category is altered from lowest to highest. Compared to the standard recall of $0.43$ at a precision of $0.29$, increasing the recall to $0.60$ would lead to a drop of precision to $0.21$. The F1-score would then decrease to $0.31$. A second means by which recall might be improved is to employ ensemble classification. The comparison in the previous section between the best ML method and rule-based sentiment analysis revealed that both systems have a rather disjoint perspective on negative stance: many more tweets are labeled as `Negative' by only one of the two systems than by both. 
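Returning briefly to the first strategy, the threshold adjustment can be sketched with scikit-learn's precision-recall utilities. This is a minimal illustration assuming a boolean gold-standard vector and Negative-class scores from a fitted classifier; the recall level of 0.60 is the one discussed above.

```python
from sklearn.metrics import precision_recall_curve

def threshold_for_recall(is_negative_gold, negative_scores, min_recall=0.60):
    """Return (precision, recall, threshold) for the highest threshold on the
    Negative-class score that still reaches the desired recall."""
    precision, recall, thresholds = precision_recall_curve(is_negative_gold,
                                                           negative_scores)
    best = None
    # thresholds are sorted in increasing order and recall is non-increasing,
    # so the last point that still satisfies min_recall has the highest
    # usable threshold (and typically the highest precision among them).
    for p, r, t in zip(precision[:-1], recall[:-1], thresholds):
        if r >= min_recall:
            best = (p, r, t)
    return best
```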
We therefore built an ensemble system that follows both systems in their perspective on tweets with a negative stance: for each tweet, if either of the systems predicts the Negative category, the ensemble system makes this prediction. The performance of the ensemble system is presented in Table TABREF23. Of the 343 tweets in the test set that are labeled as Negative, 210 are retrieved by the ensemble system. The result is a recall of $0.61$. The system does overshoot in its categorization of tweets as Negative: this category is predicted for 1,168 tweets (about 40% of total test set of 2,886 tweets). The result is a precision of $0.18$. In comparison to lowering the prediction threshold of the ML system, the ensemble system thus yields a slightly worse trade-off between precision and recall. Discussion With an F1-score of $0.36$, our system lags behind the $0.75$ F1-score reported by Du et al.BIBREF2. Several factors might have influenced this difference. A first factor is the low proportion of tweets with the label `Negative' in our dataset. In the strict labeling condition, only 343 cases are labeled as negative by two annotators, against 2,543 labeled as positive – the negative cases only comprise 13% of all instances. In the study of Du et al., the anti-vaccination category comprises 24% of all instances (1,445 tweets). More (reliable) examples might have helped in our study to train a better model of negative tweets. Secondly, Du et al. BIBREF2 focused on the English language domain, while we worked with Dutch Twitter messages. The Dutch Twitter realm harbors less data to study than the English one, and might bring forward different discussions when it comes to the topic of vaccination. It could be that the senders' stance towards vaccination is more difficult to pinpoint within these discussions. In line with this language difference, a third prominent factor that might have led to a higher performance in the study of Du et al.BIBREF2 is that they focus on a particular case of vaccination (e.g.: HPV vaccination) and split the anti-vaccination category into several more specific categories that describe the motivation of this stance. The diverse motivations for being against vaccination are indeed reflected in several other studies that focus on identifying discussion communities and viewpoints BIBREF17, BIBREF21, BIBREF19. While splitting the data into more specific categories will lead to less examples per category, it could boost performance on predicting certain categories due to a larger homogeneity. Indeed, the most dominant negative category in the study by Du et al.BIBREF2, dubbed `NegSafety' and occurring in 912 tweets (63% of all negative tweets), yielded the highest F1-score of $0.75$. While two less frequent categories were predicted at an F1-score of $0.0$, this outcome shows the benefit of breaking down the motivations behind a negative stance towards vaccination. A major limitation of our study is that the agreement rates for all categorizations are low. This is also the case in other studies, like BIBREF8, who report an agreement of $K = 0.40$ on polarity categorization. Foremost, this reflects the difficulty of the task. The way in which the stance towards vaccination is manifested in a tweet depends on the author, his or her specific viewpoint, the moment in time at which a tweet was posted, and the possible conversation thread that precedes it. Making a judgment solely based on the text could be difficult without this context. 
Agreement could possibly be improved by presenting the annotator with the preceding conversation as context to the text. Furthermore, tweets could be coded by more than two annotators. This would give insight into the subtleties of the data, with a graded scale of tweets that clearly manifest a negative stance towards vaccination to tweets that merely hint at such a stance. Such a procedure could likewise help to generate more reliable examples to train a machine learning classifier. The low agreement rates also indicate that measuring stance towards vaccination in tweets is a too difficult task to assign only to a machine. We believe that the human-in-the-loop could be an important asset in any monitoring dashboard that focuses on stance in particular discussions. The system will have an important role in filtering the bigger stream of messages, leaving the human ideally with a controllable set of messages to sift through to end up with reliable statistics on the stance that is seen in the discussion at any point in time. In the analysis section, we explored two approaches to increase recall of messages with a negative stance, which would be most useful in this scenario. Lowering the prediction threshold showed to be most effective to this end. Our primary aim in future work is to improve performance. We did not experiment with different types of features in our current study. Word embeddings might help to include more semantics in our classifier’s model. In addition, domain knowledge could be added by including word lists, and different components might be combined to address different features of the data (e.g.: sarcasm and implicit stance). We also aim to divide the negative category into the specific motivations behind a negative stance towards vaccination, like in the study of Du et al.BIBREF2, so as to obtain more homogeneous categories. Parallel to this new categorization of the data, adding more labeled data appears to be the most effective way to improve our model. The learning curve that we present in Figure FIGREF18 shows that there is no performance plateau reached with the current size of the data. An active learning setting BIBREF31, starting with the current system, could be applied to select additional tweets to annotate. Such a setting could be incorporated in the practical scenario where a human-in-the-loop judges the messages that were flagged as displaying a negative stance by the system. The messages that are judged as correctly and incorrectly predicted could be added as additional reliable training data to improve upon the model. We have installed a dashboard that is catered for such a procedure, starting with the machine learning system that yielded the best performance in our current study. Conclusions We set out to train a classifier to distinguish Twitter messages that display a negative stance towards vaccination from other messages that discuss the topic of vaccination. Based on a set of 8,259 tweets that mention a vaccination-related keyword, annotated for their relevance, stance and sentiment, we tested a multitude of machine learning classifiers, alternating the algorithm, the reliability of training data and the labels to train on. The best performance, with a precision of $0.29$, a recall of $0.43$, an F1-score of $0.36$ and an AUC of $0.66$, was yielded by training an SVM classifier on strictly and laxly labeled data to distinguish irrelevant tweets and polarity categories. 
The baselines, with an optimal F1-score of $0.25$ (rule-based sentiment analysis), were considerably outperformed. The latter shows the benefit of machine-learned classifiers on domain-specific sentiment: despite being trained on a reasonably small amount of data, the machine-learning approach outperforms general-purpose sentiment analysis tools.
Availability and requirements
Project name: Prikbord
Project home page: http://prikbord.science.ru.nl/
Operating system: Linux
Programming language: Python, javascript
Other requirements: Django 1.5.11 or higher, MongoDB 2.6.10, pymongo 2.7.2 or higher, requests 2.13.0 or higher
License: GNU GPL
Any restrictions to use by non-academics: licence needed
Abbreviations
EMM: Europe Media Monitor
MMR: Mumps, Measles, Rubella
LDA: Latent Dirichlet Allocation
ML: Machine learning
SVM: Support Vector Machines
AUC: Area under the ROC Curve
Clf: Classifier
NB: Naive Bayes
Pr: Precision
Re: Recall
Declarations ::: Ethics approval and consent to participate Not applicable.
Declarations ::: Consent for publication Not applicable.
Declarations ::: Availability of data and materials http://cls.ru.nl/fkunneman/data_stance_vaccination.zip
Declarations ::: Competing interests The authors declare that they have no competing interests.
Declarations ::: Funding This study has been funded by the Rijksinstituut voor Volksgezondheid en Milieu.
Declarations ::: Author's contributions FK has set up the annotations procedure, performed the Machine Learning experiments and analysis, annotated tweets in the analysis and did a major part of the writing. ML has done part of the writing in the Introduction and Conclusion sections. AW has advised on the experimentation and analysis. AB has advised on the experimentation and has edited the complete text. LM has set up the annotations procedure, annotated tweets in the analysis and has done a major part of the writing. All authors read and approved the final manuscript.
Declarations ::: Acknowledgements We thank Erik Tjong Kim Sang for the development and support of the http://twiqs.nl service. We also thank the ones who have contributed with annotations.
Relevance and Subject categorizations are annotated at a percent agreement of $0.71$ and $0.70$, their agreement scores are only fair, at $\alpha =0.27$ and $\alpha =0.29$, Stance and Sentiment, which carry more categories than the former two, is $0.54$ for both. Their agreement scores are also fair, at $\alpha =0.35$ and $\alpha =0.34$, This holds for the Relevant category ($0.81$), the Vaccine category ($0.79$) and the Positive category ($0.64$), The Negative category yields a mutual F-score of $0.42$, which is higher than the more frequently annotated categories Neutral ($0.23$) and Not clear ($0.31$).
Q: What is the size of the labelled dataset? Text: Background In the light of increased vaccine hesitance in various countries, consistent monitoring of public beliefs and opinions about the national immunization program is important. Besides performing qualitative research and surveys, real-time monitoring of social media data about vaccination is a valuable tool to this end. The advantage is that one is able to detect and respond to possible vaccine concerns in a timely manner, that it generates continuous data and that it consists of unsolicited, voluntary user-generated content. Several studies that analyse tweets have already been conducted, providing insight in the content that was tweeted most during the 2009 H1N1 outbreak BIBREF0, the information flow between users with a certain sentiment during this outbreak BIBREF1, or trends in tweets that convey, for example, the worries on efficacy of HPV vaccines BIBREF2, BIBREF3. While human coders are best at deploying world knowledge and interpreting the intention behind a text, manual coding of tweets is laborious. The above-mentioned studies therefore aimed at developing and evaluating a system to code tweets automatically. There are several systems in place that make use of this automatic coding. The Vaccine Confidence Project BIBREF4 is a real-time worldwide internet monitor for vaccine concerns. The Europe Media Monitor (EMM) BIBREF5 was installed to support EU institutions and Member State organizations with, for example, the analysis real-time news for medical and health-related topics and with early warning alerts per category and country. MEDISYS, derived from the EMM and developed by the Joint Research Center of the European Commission BIBREF6, is a media monitoring system providing event-based surveillance to rapidly identify potential public health threats based on information from media reports. These systems cannot be used directly for the Netherlands because they do not contain search words in Dutch, are missing an opinion-detection functionality, or do not include categories of the proper specificity. Furthermore, opinions towards vaccination are contextualized by national debates rather than a multinational debate BIBREF7, which implies that a system for monitoring vaccination stance on Twitter should ideally be trained and applied to tweets with a similar language and nationality. Finally, by creating an automatic system for mining public opinions on vaccination concerns, one can continue training and adapting the system. We therefore believe it will be valuable to build our own system. Besides analysing the content of tweets, several other applications that use social media with regard to vaccination have been proposed. They, for example, use data about internet search activity and numbers of tweets as a proxy for (changes in) vaccination coverage or for estimating epidemiological patterns. Huang et al. BIBREF8 found a high positive correlation between reported influenza attitude and behavior on Twitter and influenza vaccination coverage in the US. In contrast, Aquino et al. BIBREF9 found an inverse correlation between Mumps, Measles, Rubella (MMR) vaccination coverage and tweets, Facebook posts and internet search activity about autism and MMR vaccine in Italy. This outcome was possibly due to a decision of the Court of Justice in one of the regions to award vaccine-injury compensation for a case of autism. 
Wagner, Lampos, Cox and Pebody BIBREF10 assessed the usefulness of geolocated Twitter posts and Google search as source data to model influenza rates, by measuring their fit to the traditional surveillance outcomes and analyzing the data quality. They find that Google search could be a useful alternative to the regular means of surveillance, while Twitter posts are not correlating well due to a lower volume and bias in demographics. Lampos, de Bie and Christianinni BIBREF11 also make use of geolocated Twitter posts to track academics, and present a monitoring tool with a daily flu-score based on weighted keywords. Various studies BIBREF12, BIBREF13, BIBREF14 show that estimates of influenza-like illness symptoms mentioned on Twitter can be exploited to track reported disease levels relatively accurately. However, other studies BIBREF15, BIBREF16 showed that this was only the case when looking at severe cases (e.g. hospitalizations, deaths) or only for the start of the epidemic when interest from journalists was still high. Other research focuses on detecting discussion communities on vaccination in Twitter BIBREF17 or analysing semantic networks BIBREF18 to identify the most relevant and influential users as well as to better understand complex drivers of vaccine hesitancy for public health communication. Tangherlini et al. BIBREF19 explore what can be learned about the vaccination discussion from the realm of “mommy blogs”: parents posting messages about children’s health care on forum websites. They aim to obtain insights in the underlying narrative frameworks, and analyse the topics of the messages using Latent Dirichlet Allocation (LDA) BIBREF20. They find that the most prominent frame is a focus on the exemption of one’s child from receiving a vaccination in school. The motivation against vaccination is most prominently based on personal belief about health, but could also be grounded in religion. Surian et al. BIBREF21 also apply topic modeling to distinguish dominant opinions in the discussion about vaccination, and focus on HPV vaccination as discussed on Twitter. They find a common distinction between tweets reporting on personal experience and tweets that they characterize as `evidence’ (statements of having had a vaccination) and `advocacy’ (statements that support vaccination). Most similar to our work is the study by Du, Xu, Song, Liu and Tao BIBREF2. With the ultimate aim to improve the vaccine uptake, they applied supervised machine learning to analyse the stance towards vaccination as conveyed on social media. Messages were labeled as either related to vaccination or unrelated, and, when related, as ‘positive’, ‘negative’ or ‘neutral’. The ‘negative’ category was further broken down into several considerations, such as ‘safety’ and ‘cost’. After having annotated 6,000 tweets, they trained a classifier on different combinations of features, obtaining the highest macro F1-score (the average of the separate F1-scores for each prediction category) of $0.50$ and micro F1-score (the F1-score over all predictions) of $0.73$. Tweets with a negative stance that point to safety risks could best be predicted, at an optimal F1 score of $0.75$, while the other five sub-categories with a negative stance were predicted at an F1 score below $0.5$ or even $0.0$. Like Du et al. BIBREF2, we focus on analysing sentiment about vaccination using Twitter as a data source and applying supervised machine learning approaches to extract public opinion from tweets automatically. 
In contrast, in our evaluation we focus on detecting messages with a negative stance in particular. Accurately monitoring such messages helps to recognize discord in an early stage and take appropriate action. We do train machine learning classifiers on modeling other categories than the negative stance, evaluating whether this is beneficial to detecting tweets with a negative stance. For example, we study whether it is beneficial to this task to model tweets with a positive and neutral stance as well. We also inquire whether a more fine-grained categorization of sentiment (e.g.: worry, relief, frustration and informing) offers an advantage. Apart from comparing performance in the context of different categorizations, we compare different machine learning algorithms and compare data with different levels of annotation reliability. Finally, the performance of the resulting systems is compared to regular sentiment analysis common to social media monitoring dashboards. At the public health institute in the Netherlands, we make use of social media monitoring tools offered by Coosto. For defining whether a message is positive, negative or neutral with regard to vaccination, this system makes use of the presence or absence of positive or negative words in the messages. We believe that we could increase the sensitivity and specificity of the sentiment analysis by using supervised machine learning approaches trained on a manually coded dataset. The performance of our machine learning approaches is therefore compared to the sentiment analysis that is currently applied in the Coosto tool. Implementation We set out to curate a corpus of tweets annotated for their stance towards vaccination, and to employ this corpus to train a machine learning classifier to distinguish tweets with a negative stance towards vaccination from other tweets. In the following, we will describe the stages of data acquisition, from collection to labeling. Implementation ::: Data collection We queried Twitter messages that refer to a vaccination-related key term from TwiNL , a database with IDs of Dutch Twitter messages from January 2012 onwards BIBREF22. In contrast to the open Twitter Search API, which only allows one to query tweets posted within the last seven days, TwiNL makes it possible to collect a much larger sample of Twitter posts, ranging several years. We queried TwiNL for different key terms that relate to the topic of vaccination in a five-year period, ranging from January 1, 2012 until February 8, 2017. Query terms that we used were the word `vaccinatie’ (Dutch for `vaccination’) and six other terms closely related to vaccination, with and without a hashtag (`#’). Among the six words is `rijksvaccinatieprogramma’, which refers to the vaccination programme in The Netherlands. An overview of all query terms along with the number of tweets that could be collected based on them is displayed in Table TABREF5. We collected a total of 96,566 tweets from TwiNL, which we filtered in a number of ways. First, retweets were removed, as we wanted to focus on unique messages. This led to a removal of 31% of the messages. Second, we filtered out messages that contain a URL. Such messages often share a news headline and include a URL to refer to the complete news message. As a news headline does not reflect the stance of the person who posted the tweet, we decided to apply this filtering step. 
It is likely that part of the messages with a URL do include a message composed by the sender itself, but this step helps to clean many unwanted messages. Third, we removed messages that include a word related to animals and traveling (`dier’, animal; `landbouw’, agriculture; and `teek’, tick), as we strictly focus on messages that refer to vaccination that is part of the governmental vaccination program. 27,534 messages were left after filtering. This is the data set that is used for experimentation. Implementation ::: Data annotation The stance towards vaccination was categorized into `Negative’, `Neutral’, `Positive’ and `Not clear’. The latter category was essential, as some posts do not convey enough information about the stance of the writer. In addition to the four-valued stance classes we included separate classes grouped under relevance, subject and sentiment as annotation categories. With these additional categorizations we aimed to obtain a precise grasp of all possibly relevant tweet characteristics in relation to vaccination, which could help in a machine learning setting. The relevance categories were divided into `Relevant’, `Relevant abroad’ and `Irrelevant’. Despite our selection of vaccination-related keywords, tweets that mention these words might not refer to vaccination at all. A word like `vaccine’ might be used in a metaphorical sense, or the tweet might refer to vaccination of animals. The subject categorization was included to describe what the tweet is about primarily: `Vaccine’, `Disease’ or `Both’. We expected that a significant part of the tweets would focus on the severeness of a disease when discussing vaccination. Distinguishing these tweets could help the detection of the stance as well. Finally, the sentiment of tweets was categorized into `Informative’, `Angry/Frustration’, `Worried/Fear/Doubts’, `Relieved’ and `Other’, where the latter category lumps together occasional cases of humor, sarcasm, personal experience, and question raised. These categories were based on the article by BIBREF0, and emerged from analysing their H1N1-related tweets. The `Informative’ category refers to a typical type of message in which information is shared, potentially in support of a negative or positive stance towards vaccination. If the message contained more than one sentiment, the first sentiment identified was chosen. Table TABREF6 shows examples of tweets for the above-mentioned categories. We aimed at a sufficient number of annotated tweets to feed a machine learning classifier with. The majority of tweets were annotated twice. We built an annotation interface catered to the task. Upon being presented with the text of a Twitter post, the annotator was first asked whether the tweet was relevant. In case it was deemed relevant, the tweet could be annotated for the other categorizations. Otherwise, the user could click `OK’ after which he or she was directly presented with a new Twitter post. The annotator was presented with sampled messages that were either not annotated yet or annotated once. We ensured a fairly equal distribution of these two types, so that most tweets would be annotated twice. As annotators, we hired four student assistants and additionally made use of the Radboud Research Participation System. We asked participants to annotate for the duration of an hour, in exchange for a voucher valued ten Euros, or one course credit. 
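As an illustration, the three filtering steps described above (removing retweets, messages containing a URL, and messages containing the listed animal- and agriculture-related terms) could be implemented along the following lines. This is a minimal sketch in Python, not the authors' actual code; the record format, the URL pattern and the helper name are assumptions for illustration.

```python
import re

# Illustrative sketch of the filtering procedure; names and record format are assumptions.
URL_PATTERN = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)
EXCLUDE_TERMS = {"dier", "landbouw", "teek"}  # terms used to drop off-topic messages

def filter_tweets(records):
    """records: iterable of (tweet_id, text, is_retweet) tuples."""
    kept = []
    for tweet_id, text, is_retweet in records:
        if is_retweet:                        # 1. drop retweets
            continue
        if URL_PATTERN.search(text):          # 2. drop messages containing a URL
            continue
        tokens = set(text.lower().split())
        if tokens & EXCLUDE_TERMS:            # 3. drop animal/agriculture-related messages
            continue
        kept.append((tweet_id, text))
    return kept
```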
Before starting the annotation, the participants were asked to read the annotation manual, with examples and an extensive description of the categories, and were presented with a short training round in which feedback on their annotations was given. The annotation period lasted for six weeks. We stopped when the number of applicants dropped. A total of 8,259 tweets were annotated, of which 6,472 were annotated twice (78%). 65 annotators joined in the study, with an average of $229.5$ annotated tweets per person. The number of annotations per person varied considerably, with $2,388$ tweets coded by the most active annotator. This variation is due to the different ways in which annotators were recruited: student-assistants were recruited for several days, while participants recruited through the Radboud Research Participation System could only join for the duration of an hour. We calculated inter-annotator agreement using Krippendorff's Alpha BIBREF23, which accounts for different annotator pairs and empty values. To also zoom in on the particular agreement by category, we calculated mutual F-scores for each of the categories. This metric is typically used to evaluate system performance by category on gold standard data, but could also be applied to annotation pairs by alternating the roles of the two annotators between classifier and ground truth. A summary of the agreement by categorization is given in Table TABREF10. While both the Relevance and Subject categorizations are annotated at a percent agreement of $0.71$ and $0.70$, their agreement scores are only fair, at $\alpha =0.27$ and $\alpha =0.29$. The percent agreement on Stance and Sentiment, which carry more categories than the former two, is $0.54$ for both. Their agreement scores are also fair, at $\alpha =0.35$ and $\alpha =0.34$. The mutual F-scores show marked differences in agreement by category, where the categories that were annotated most often typically yield a higher score. This holds for the Relevant category ($0.81$), the Vaccine category ($0.79$) and the Positive category ($0.64$). The Negative category yields a mutual F-score of $0.42$, which is higher than the more frequently annotated categories Neutral ($0.23$) and Not clear ($0.31$). We found that these categories are often confused. After combining the annotations of these two categories, the stance agreement would be increased to $\alpha =0.43$. The rather low agreement over the annotation categories indicates the difficulty of interpreting stance and sentiment in tweets that discuss the topic of vaccination. We therefore proceed with caution to categorize the data for training and testing our models. The agreed upon tweets will form the basis of our experimental data, as was proposed by Jakubiçek, Kovar and Rychly BIBREF24, while the other data is added as additional training material to see if the added quantity is beneficial to performance. We will also annotate a sample of the agreed upon tweets, to make sure that these data are reliable in spite of the low agreement rate. Implementation ::: Data categorization The labeled data that we composed based on the annotated tweets are displayed in Table TABREF11. We combined the Relevant and Relevant abroad categories into one category (`Relevant’), as only a small part of the tweets was annotated as Relevant abroad. We did not make use of the subject annotations, as a small minority of the tweets that were relevant referred to a disease only. For the most important categorization, stance, we included all annotated labels. 
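To make the mutual F-score used in the agreement analysis above concrete: for a given category, the F1-score over the doubly annotated tweets can be computed twice, alternating which annotator is treated as the ground truth, and the two values averaged. The sketch below illustrates that idea with scikit-learn's f1_score; it is not the authors' implementation, and the function name is an assumption.

```python
from sklearn.metrics import f1_score

def mutual_f_score(labels_a, labels_b, category):
    """Mutual F-score for one category over doubly annotated tweets.

    labels_a and labels_b are parallel lists with the labels assigned by
    the two annotators. The roles of classifier and ground truth are
    alternated and the two F1-scores averaged. (For this binary one-vs-rest
    view the two directions coincide, but the alternation mirrors the
    procedure described above.)
    """
    a = [int(label == category) for label in labels_a]
    b = [int(label == category) for label in labels_b]
    f_ab = f1_score(a, b, zero_division=0)  # annotator A as ground truth
    f_ba = f1_score(b, a, zero_division=0)  # annotator B as ground truth
    return (f_ab + f_ba) / 2.0
```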
Finally, we combined part of the more frequent sentiment categories with Positive. We distinguish three types of labeled tweets: `strict’, `lax’ and `one’. The strictly labeled tweets were labeled by both annotators with the same label. The lax labels describe tweets that were only annotated with a certain category by one of the coders. The categories were ordered by importance to decide on the lax labels. For instance, in case of the third categorization, Negative was preferred over Positive, followed by Neutral, Not clear and Irrelevant. If one of the annotators labeled a tweet as Positive and the other as Neutral, the lax label for this tweet is Positive. In table TABREF11, the categories are ordered by preference as imposed on the lax labeling. The `one' labeling applies to all tweets that were annotated by only one annotator. Note that the total counts can differ between label categorizations due to the lax labeling: the counts for Positive labels in the Polarity + sentiment labeling (Positive + Frustration, Positive + Information and Positive + other) do not add up to the count of the Positive label in the Polarity labeling. With the `strict’, `lax’ and `one’ labeling, we end up with four variants of data to experiment with: only strict, strict + lax, strict + one and strict + lax + one. The strict data, which are most reliable, are used in all variants. By comparing different combinations of training data, we test whether the addition of less reliably labeled data (lax and/or one) boosts performance. The four labelings have an increasing granularity, where the numbers of examples for the Negative category are stable across each labeling. In the first labeling, these examples are contrasted with any other tweet. It hence comprises a binary classification task. In the second labeling, irrelevant tweets are indicated in a separate category. The Other class here represents all relevant tweets that do not convey a negative stance towards vaccination. In the third labeling, this class is specified as the stance categories Positive, Neutral and Not clear. In the fourth labeling, the Positive category, which is the most frequent polarity class, is further split into `Positive + frustration’, `Positive + Information’ and `Positive + Other’. Positivity about vaccination combined with a frustration sentiment reflects tweets that convey frustration about the arguments of people who are negative about vaccination (e.g.: "I just read that a 17 year old girl died of the measles. Because she did not want an inoculation due to strict religious beliefs. -.- #ridiculous"). The Positive + Information category reflects tweets that provide information in favor of vaccination, or combined with a positive stance towards vaccination (e.g.: "#shingles is especially common with the elderly and chronically diseased. #vaccination can prevent much suffering. #prevention"). In line with Kovár, Rychlý and Jakubíček BIBREF25, we evaluate system performance only on the reliable part of the annotations - the instances labeled with the same label by two annotators. As the overall agreement is not sufficient, with Krippendorff's Alpha ranging between $0.27$ and $0.35$, the first author annotated 300 tweets sampled from the strict data (without knowledge of the annotations) to rule out the possibility that these agreed upon annotations are due to chance agreement. Comparing these new annotations to the original ones, the Negative category and the Positive category are agreed upon at mutual F-scores of $0.70$ and $0.81$. 
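As an aside on the labeling procedure above, the `strict', `lax' and `one' labels could be derived from the paired annotations roughly as sketched below. The preference order follows the example given for the polarity categorization; the function and variable names are illustrative assumptions, not the authors' code.

```python
# Preference order for the lax labeling of the polarity categorization,
# as described above: Negative > Positive > Neutral > Not clear > Irrelevant.
PREFERENCE = ["Negative", "Positive", "Neutral", "Not clear", "Irrelevant"]
RANK = {label: i for i, label in enumerate(PREFERENCE)}

def resolve_labels(label_1, label_2=None):
    """Return (label, reliability) for a tweet annotated once or twice."""
    if label_2 is None:
        return label_1, "one"        # annotated by a single annotator
    if label_1 == label_2:
        return label_1, "strict"     # both annotators assigned the same label
    # Disagreement: the more preferred label wins and the tweet is labeled laxly.
    winner = min(label_1, label_2, key=RANK.__getitem__)
    return winner, "lax"

# Example from the text: one annotator chose Positive, the other Neutral.
assert resolve_labels("Positive", "Neutral") == ("Positive", "lax")
```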
The percent agreement on the binary classification scheme (e.g.: Negative versus Other) is $0.92$, with $\alpha =0.67$, which decreases to $\alpha =0.55$ for the Relevance categorization, $\alpha =0.54$ for the Polarity categorization and $\alpha =0.43$ for the Polarity + Sentiment categorization. We find that instances of a negative and positive stance can be clearly identified by humans, while the labels Neutral and Not Clear are less clear cut. Since it is our focus to model tweets with a negative stance, the agreement on the binary decision between Negative and Other is just sufficient to use for experimentation based on Krippendorff’s BIBREF26 remark that “$\alpha \ge .667$ is the lowest conceivable limit” (p.241). In our experimental set-up we will therefore only evaluate our system performance on distinguishing the Negative category from any other category in the strict data. Implementation ::: Experimental Set-up For each combination of labeling (four types of labeling) and training data (four combinations of training data) we train a machine learning classifier to best distinguish the given labels. Two different classifiers are compared: Multinomial Naive Bayes and Support Vector Machines (SVM). In total, this makes for 32 variants (4 labelings $\times $ 4 combinations of training data $\times $ 2 classifiers). All settings are tested through ten-fold cross-validation on the strict data and are compared against two rule-based sentiment analysis baselines and two random baselines. All components of the experimental set-up are described in more detail below. Implementation ::: Experimental Set-up ::: Preprocessing To properly distinguish word tokens and punctuation we tokenized the tweets by means of Ucto, a rule-based tokenizer with good performance on the Dutch language, and with a configuration specific for Twitter. Tokens were lowercased in order to focus on the content. Punctuation was maintained, as well as emoji and emoticons. Such markers could be predictive in the context of a discussion such as vaccination. To account for sequences of words and characters that might carry useful information, we extracted word unigrams, bigrams, and trigrams as features. Features were coded binary, i.e. set to 1 if a feature is seen in a message and set to 0 otherwise. During training, all features apart from the top 15,000 most frequent ones were removed. Implementation ::: Experimental Set-up ::: Machine Learning We applied two machine learning algorithms with a different perspective on the data: Multinomial Naive Bayes and SVM. The former algorithm is often used on textual data. It models the Bayesian probability of features to belong to a class and makes predictions based on a linear calculation. Features are naively seen as independent of one another BIBREF27. In their simplest form, SVMs are binary linear classifiers that make use of kernels. They search for the optimal hyperplane in the feature space that maximizes the geometric margin between any two classes. The advantage of SVMs is that they provide a solution to a global optimization problem, thereby reducing the generalization error of the classifier BIBREF28. We applied both algorithms by means of the scikit-learn toolkit, a python library that offers implementations of many machine learning algorithms BIBREF29. To cope with imbalance in the number of instances per label, for Multinomial Naive Bayes we set the Alpha parameter to $0.0$ and muted the fit prior. 
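A rough sketch of the feature extraction and the Naive Bayes configuration described above is given below, using scikit-learn. It assumes that tweets have already been tokenized with Ucto and re-joined with whitespace, and a near-zero smoothing value stands in for the reported Alpha of $0.0$; none of this is the authors' actual code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Binary word n-gram features (unigrams, bigrams and trigrams), lowercased,
# capped at the 15,000 most frequent n-grams, as described above.
vectorizer = CountVectorizer(
    ngram_range=(1, 3),
    binary=True,
    lowercase=True,
    max_features=15000,
    token_pattern=r"\S+",  # keep punctuation, emoji and emoticons as tokens
)

# Multinomial Naive Bayes with (near-)zero smoothing and the fitted class
# prior muted, following the settings reported above.
nb_classifier = make_pipeline(vectorizer, MultinomialNB(alpha=1e-10, fit_prior=False))
# nb_classifier.fit(train_texts, train_labels)
```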
For SVM, we used a linear kernel with the $C$ parameter set to $1.0$ and a balanced class weight. Implementation ::: Experimental Set-up ::: Baselines As baselines, we applied two rule-based sentiment analysis systems for Dutch as well as two random baselines. The first rule-based sentiment analysis system is Pattern, an off-the-shelf sentiment analysis system that makes use of a list of adjectives with a positive or negative weight, based on human annotations BIBREF30. Sentences are assigned a score between $-1.0$ and $1.0$ by multiplying the scores of their adjectives. Bigrams like `horribly good’ are seen as one adjective, where the adjective `horribly’ increases the positivity score of `good’. We translated the polarity score into the discrete labels `Negative’, `Positive’ and `Neutral’ by using the training data to infer which threshold leads to the best performance on the `Negative’ category. The second baseline is the sentiment analysis offered by the social media monitoring dashboard Coosto. As Coosto is a commercial product, there is no public documentation on their sentiment analysis tool. In addition to these two baselines, we applied two random baselines: predicting the negative class randomly for 50% of the messages and predicting the negative class randomly for 15% of the messages. The latter proportion relates to the proportion of vaccination-hesitant tweets in the strictly labeled data on which we test the systems. Implementation ::: Evaluation We evaluate performance by means of ten-fold cross-validation on the strictly labeled data. In each of the folds, 90% of the strictly labeled data is used as training data, which are complemented with the laxly labeled data and/or the data labeled by one annotator, in three of the four training data variants. Performance is always tested on the strict data. As evaluation metrics we calculate the F1-score and the Area Under the ROC Curve (AUC) on predicting the negative stance towards vaccination in the test tweets. Results We trained machine learning (ML) classifiers to distinguish Twitter messages with a negative stance towards vaccination, alternating three aspects of the system: the labels to train on, the composition of the training data and the ML algorithm. The results are presented in Table TABREF15, as the F1-score and AUC of any setting on correctly predicting tweets with a negative stance. Systems with specific combinations of the ML classifier and size of the training data are given in the rows of the table. The four types of labelings are listed in the columns. The results show a tendency for each of the three manipulations. Regarding the ML algorithm, SVM consistently outperforms Naive Bayes for this task. Furthermore, adding additional training data, albeit less reliable, generally improves performance. Training a model on all available data (strict + lax + one) leads to an improvement over using only the strict data, while adding only the laxly labeled data is generally better than using all data. Adding only the data labeled by one annotator often leads to a worse performance. With respect to the labeling, the Polarity-sentiment labeling generally leads to the best outcomes, although the overall best outcome is yielded by training an SVM on Polarity labeling with strict data appended by lax data, at an area under the curve score of $0.66$. The best reported performance is an F1-score of $0.36$ and an AUC of $0.66$. In comparison to the baselines (Table TABREF16), these scores are considerably higher. 
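The SVM configuration and the fold-wise evaluation described in the Experimental Set-up could be realized roughly as in the sketch below. It assumes dense NumPy arrays (a vectorized feature matrix and an array of string labels with `Negative' among them) and simplifies the handling of the additional, less reliably labeled training data; the helper name and data handling are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import f1_score, roc_auc_score

def evaluate_svm(X_strict, y_strict, X_extra=None, y_extra=None, n_folds=10):
    """Ten-fold evaluation: train on 90% of the strict data, optionally
    complemented with less reliably labeled data, test on the held-out
    strict fold, and score the Negative class with F1 and AUC."""
    folds = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=42)
    f1_scores, auc_scores = [], []
    for train_idx, test_idx in folds.split(X_strict, y_strict):
        X_train, y_train = X_strict[train_idx], y_strict[train_idx]
        if X_extra is not None:  # add laxly / singly labeled data to training only
            X_train = np.vstack([X_train, X_extra])
            y_train = np.concatenate([y_train, y_extra])
        clf = SVC(kernel="linear", C=1.0, class_weight="balanced", probability=True)
        clf.fit(X_train, y_train)
        y_true = y_strict[test_idx] == "Negative"
        y_pred = clf.predict(X_strict[test_idx]) == "Negative"
        neg_col = list(clf.classes_).index("Negative")
        neg_proba = clf.predict_proba(X_strict[test_idx])[:, neg_col]
        f1_scores.append(f1_score(y_true, y_pred))
        auc_scores.append(roc_auc_score(y_true, neg_proba))
    return float(np.mean(f1_scores)), float(np.mean(auc_scores))
```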
Nevertheless, there is room for improvement. The performance of the random baselines, with F1-scores of $0.18$ (50%) and $0.13$ (15%), indicates that the minimal performance on this task is rather low. The rule-based sentiment analyses yield better performances, at an F1-score of $0.20$ for Pattern and $0.25$ for Coosto. To analyse the behavior of the best ML system, we present a confusion table of its classifications in Table TABREF17. The Irrelevant category is most often classified with one of the other categories, while the Positive and Negative categories are confused with each other most often. The classifier is possibly identifying features that denote a stance, but struggles to distinguish positive from negative. To gain insight into the potential of increasing the amount of training data, we applied the best ML system (an SVM trained on strict and lax data with the polarity labels) to increasing portions of the strictly labeled data, in steps of 10%, starting with a small sample and ending with all available data (excluding the test data). The learning curve is presented in Figure FIGREF18. It shows an improved performance up to the point where the last portion of training data is added, indicating that more training data would likely yield better performance. Results ::: Comparison machine learning and rule-based sentiment analysis A confusion table of the predictions of the best of the two rule-based baselines, Pattern, and the best ML system is displayed in Table TABREF19. Only 192 tweets are labeled by both systems as Negative, while the best ML system accounts for almost double this amount and Pattern for three times as much. Comparing the predictions to the gold standard labeling, 99 of the tweets predicted only by the best ML system as Negative are correct (27%), as opposed to 51 that are exclusive to Pattern (8%). Of the tweets that were classified by both as negative, 63 are correct (33%). This shows that the approaches have a rather complementary view on tweets with a negative stance. To gain more insight into the behavior of both approaches, we applied them to 15,577 unlabeled tweets. Table TABREF20 presents a confusion table with the numbers of tweets that were classified as Negative or another category by both approaches. Again, Pattern accounts for the majority of negatively labeled messages, and the overlap is small. Two of the authors validated for a sample of 600 messages whether they actually manifested a negative attitude towards vaccination: 200 messages that were uniquely classified by the best ML system as Negative, 200 messages that were solely labeled as Negative by Pattern and 200 messages that were classified by both systems as Negative. This validation showed the same tendency as for the labeled data, with a higher precision of the best ML system in comparison to Pattern (33.5% versus 21% of the messages correctly predicted) and the highest precision when both systems predicted the negative class (36%). The complementary view on tweets with a negative stance between the best ML system and rule-based sentiment analysis becomes clear from their differing predictions. To make this difference concrete, we present a selection of messages, together with the predictions of the two systems, in Table TABREF21. The first three are only predicted by the best ML system as Negative, and not by Pattern, while the fourth to sixth examples are only seen as Negative by Pattern. 
Where the former give arguments (`can not be compared...’, `kids are dying from it’) or take stance (`I’m opposed to...’), the latter examples display more intensified words and exclamations (`that’s the message!!’, `Arrogant’, `horrific’) and aggression towards a person or organization. The last three tweets are seen by both systems as Negative. They are characterized by intensified words that linked strongly to a negative stance towards vaccination (`dangerous’, `suffering’, `get lost with your compulsory vaccination’). Table TABREF21 also features tweets that were predicted as Negative by neither the best ML-system nor Pattern, representing the most difficult instances of the task. The first two tweets include markers that explicitly point to a negative stance, such as `not been proven' and `vaccinating is nonsense'. The third tweet manifests a negative stance by means of the sarcastic phrase `way to go' (English translation). The use of sarcasm, where typically positive words are used to convey a negative valence, complicates this task of stance prediction. The last tweet advocates an alternative to vaccination, which implicitly can be explained as a negative stance towards vaccination. Such implicitly packaged viewpoints also hamper the prediction of negative stance. Both sarcasm and implicit stance could be addressed by specific modules. Results ::: Improving recall For monitoring the number of Twitter messages over time that are negative towards vaccination, it is arguably more important to detect them at a high recall than at a high precision. False positives (messages incorrectly flagged as Negative) could be filtered manually by a human end user, while False Negatives (messages with a negative stance that are not detected) will be missed. We set out to improve recall, making use of classifier confidence scores and the complementary classifications of Pattern and the best ML system. A first recall-improving approach is to reset the prediction threshold for the Negative category. For any given instance, the SVM classifier estimates the probability of all categories it was trained on. It will predict the Negative category for an instance if its probability exceeds the probabilities of the other categories. This prediction can be altered by changing the threshold; setting the threshold higher will generally mean that fewer instances will be predicted as a Negative category (corresponding to a higher precision), whereas setting it lower will mean more instances will be predicted as such (corresponding to a higher recall). Thus, the balance between precision and recall can be set as desired, to favor one or another. However, in many cases, changing the threshold will not lead to a (strong) increase in overall performance. Figure FIGREF22 presents the balance between recall and precision as a result of predicting the Negative category with the best ML system, when the threshold for this category is altered from lowest to highest. Compared to the standard recall of $0.43$ at a precision of $0.29$, increasing the recall to $0.60$ would lead to a drop of precision to $0.21$. The F1-score would then decrease to $0.31$. A second means by which recall might be improved is to employ ensemble classification. The comparison in the previous section between the best ML method and rule-based sentiment analysis revealed that both systems have a rather disjoint perspective on negative stance: many more tweets are labeled as `Negative' by only one of the two systems than by both. 
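A minimal sketch of the threshold-based approach described above is given below, tracing the precision/recall trade-off for the Negative class with scikit-learn's precision_recall_curve. It assumes a fitted classifier with probability estimates, such as the SVC sketched earlier; the function name is an illustrative assumption.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def negative_precision_recall(clf, X_test, y_test, target_recall=0.60):
    """Among thresholds reaching the target recall for the Negative class,
    return the (precision, recall) point with the highest precision."""
    neg_col = list(clf.classes_).index("Negative")
    neg_proba = clf.predict_proba(X_test)[:, neg_col]
    precision, recall, _ = precision_recall_curve(y_test == "Negative", neg_proba)
    feasible = np.where(recall >= target_recall)[0]
    best = feasible[np.argmax(precision[feasible])]
    return float(precision[best]), float(recall[best])
```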
We therefore built an ensemble system that follows both systems in their perspective on tweets with a negative stance: for each tweet, if either of the systems predicts the Negative category, the ensemble system makes this prediction. The performance of the ensemble system is presented in Table TABREF23. Of the 343 tweets in the test set that are labeled as Negative, 210 are retrieved by the ensemble system. The result is a recall of $0.61$. The system does overshoot in its categorization of tweets as Negative: this category is predicted for 1,168 tweets (about 40% of total test set of 2,886 tweets). The result is a precision of $0.18$. In comparison to lowering the prediction threshold of the ML system, the ensemble system thus yields a slightly worse trade-off between precision and recall. Discussion With an F1-score of $0.36$, our system lags behind the $0.75$ F1-score reported by Du et al.BIBREF2. Several factors might have influenced this difference. A first factor is the low proportion of tweets with the label `Negative' in our dataset. In the strict labeling condition, only 343 cases are labeled as negative by two annotators, against 2,543 labeled as positive – the negative cases only comprise 13% of all instances. In the study of Du et al., the anti-vaccination category comprises 24% of all instances (1,445 tweets). More (reliable) examples might have helped in our study to train a better model of negative tweets. Secondly, Du et al. BIBREF2 focused on the English language domain, while we worked with Dutch Twitter messages. The Dutch Twitter realm harbors less data to study than the English one, and might bring forward different discussions when it comes to the topic of vaccination. It could be that the senders' stance towards vaccination is more difficult to pinpoint within these discussions. In line with this language difference, a third prominent factor that might have led to a higher performance in the study of Du et al.BIBREF2 is that they focus on a particular case of vaccination (e.g.: HPV vaccination) and split the anti-vaccination category into several more specific categories that describe the motivation of this stance. The diverse motivations for being against vaccination are indeed reflected in several other studies that focus on identifying discussion communities and viewpoints BIBREF17, BIBREF21, BIBREF19. While splitting the data into more specific categories will lead to less examples per category, it could boost performance on predicting certain categories due to a larger homogeneity. Indeed, the most dominant negative category in the study by Du et al.BIBREF2, dubbed `NegSafety' and occurring in 912 tweets (63% of all negative tweets), yielded the highest F1-score of $0.75$. While two less frequent categories were predicted at an F1-score of $0.0$, this outcome shows the benefit of breaking down the motivations behind a negative stance towards vaccination. A major limitation of our study is that the agreement rates for all categorizations are low. This is also the case in other studies, like BIBREF8, who report an agreement of $K = 0.40$ on polarity categorization. Foremost, this reflects the difficulty of the task. The way in which the stance towards vaccination is manifested in a tweet depends on the author, his or her specific viewpoint, the moment in time at which a tweet was posted, and the possible conversation thread that precedes it. Making a judgment solely based on the text could be difficult without this context. 
Agreement could possibly be improved by presenting the annotator with the preceding conversation as context to the text. Furthermore, tweets could be coded by more than two annotators. This would give insight into the subtleties of the data, with a graded scale of tweets that clearly manifest a negative stance towards vaccination to tweets that merely hint at such a stance. Such a procedure could likewise help to generate more reliable examples to train a machine learning classifier. The low agreement rates also indicate that measuring stance towards vaccination in tweets is a too difficult task to assign only to a machine. We believe that the human-in-the-loop could be an important asset in any monitoring dashboard that focuses on stance in particular discussions. The system will have an important role in filtering the bigger stream of messages, leaving the human ideally with a controllable set of messages to sift through to end up with reliable statistics on the stance that is seen in the discussion at any point in time. In the analysis section, we explored two approaches to increase recall of messages with a negative stance, which would be most useful in this scenario. Lowering the prediction threshold showed to be most effective to this end. Our primary aim in future work is to improve performance. We did not experiment with different types of features in our current study. Word embeddings might help to include more semantics in our classifier’s model. In addition, domain knowledge could be added by including word lists, and different components might be combined to address different features of the data (e.g.: sarcasm and implicit stance). We also aim to divide the negative category into the specific motivations behind a negative stance towards vaccination, like in the study of Du et al.BIBREF2, so as to obtain more homogeneous categories. Parallel to this new categorization of the data, adding more labeled data appears to be the most effective way to improve our model. The learning curve that we present in Figure FIGREF18 shows that there is no performance plateau reached with the current size of the data. An active learning setting BIBREF31, starting with the current system, could be applied to select additional tweets to annotate. Such a setting could be incorporated in the practical scenario where a human-in-the-loop judges the messages that were flagged as displaying a negative stance by the system. The messages that are judged as correctly and incorrectly predicted could be added as additional reliable training data to improve upon the model. We have installed a dashboard that is catered for such a procedure, starting with the machine learning system that yielded the best performance in our current study. Conclusions We set out to train a classifier to distinguish Twitter messages that display a negative stance towards vaccination from other messages that discuss the topic of vaccination. Based on a set of 8,259 tweets that mention a vaccination-related keyword, annotated for their relevance, stance and sentiment, we tested a multitude of machine learning classifiers, alternating the algorithm, the reliability of training data and the labels to train on. The best performance, with a precision of $0.29$, a recall of $0.43$, an F1-score of $0.36$ and an AUC of $0.66$, was yielded by training an SVM classifier on strictly and laxly labeled data to distinguish irrelevant tweets and polarity categories. 
The baselines, with an optimal F1-score of $0.25$ (rule-based sentiment analysis), were considerably outperformed. The latter shows the benefit of machine-learned classifiers on domain-specific sentiment: despite being trained on a reasonably small amount of data, the machine-learning approach outperforms general-purpose sentiment analysis tools. Availability and requirements Project name: Prikbord Project home page: http://prikbord.science.ru.nl/ Operating system: Linux Programming language: Python, javascript Other requirements: Django 1.5.11 or higher, MongoDB 2.6.10, pymongo 2.7.2 or higher, requests 2.13.0 or higher License: GNU GPL Any restrictions to use by non-academics: licence needed Abbreviations EMM: Europe Media Monitor MMR: Mumps, Measles, Rubella LDA: Latent Dirichlet Allocation ML: Machine learning SVM: Support Vector Machines AUC: Area under the ROC Curve Clf: Classifier NB: Naive Bayes Pr: Precision Re: Recall Declarations ::: Ethics approval and consent to participate Not applicable. Declarations ::: Consent for publication Not applicable. Declarations ::: Availability of data and materials http://cls.ru.nl/fkunneman/data_stance_vaccination.zip Declarations ::: Competing interests The authors declare that they have no competing interests. Declarations ::: Funding This study has been funded by the Rijksinstituut voor Volksgezondheid en Milieu. Declarations ::: Author's contributions FK has set up the annotations procedure, performed the Machine Learning experiments and analysis, annotated tweets in the analysis and did a major part of the writing. ML has done part of the writing in the Introduction and Conclusion sections. AW has advised on the experimentation and analysis. AB has advised on the experimentation and has edited the complete text. LM has set up the annotations procedure, annotated tweets in the analysis and has done a major part of the writing. All authors read and approved the final manuscript. Declarations ::: Acknowledgements We thank Erik Tjong Kim Sang for the development and support of the http://twiqs.nl service. We also thank the ones who have contributed with annotations.
Wagner, Lampos, Cox and Pebody BIBREF10 assessed the usefulness of geolocated Twitter posts and Google search as source data to model influenza rates, by measuring their fit to the traditional surveillance outcomes and analyzing the data quality. They find that Google search could be a useful alternative to the regular means of surveillance, while Twitter posts are not correlating well due to a lower volume and bias in demographics. Lampos, de Bie and Christianinni BIBREF11 also make use of geolocated Twitter posts to track academics, and present a monitoring tool with a daily flu-score based on weighted keywords. Various studies BIBREF12, BIBREF13, BIBREF14 show that estimates of influenza-like illness symptoms mentioned on Twitter can be exploited to track reported disease levels relatively accurately. However, other studies BIBREF15, BIBREF16 showed that this was only the case when looking at severe cases (e.g. hospitalizations, deaths) or only for the start of the epidemic when interest from journalists was still high. Other research focuses on detecting discussion communities on vaccination in Twitter BIBREF17 or analysing semantic networks BIBREF18 to identify the most relevant and influential users as well as to better understand complex drivers of vaccine hesitancy for public health communication. Tangherlini et al. BIBREF19 explore what can be learned about the vaccination discussion from the realm of “mommy blogs”: parents posting messages about children’s health care on forum websites. They aim to obtain insights in the underlying narrative frameworks, and analyse the topics of the messages using Latent Dirichlet Allocation (LDA) BIBREF20. They find that the most prominent frame is a focus on the exemption of one’s child from receiving a vaccination in school. The motivation against vaccination is most prominently based on personal belief about health, but could also be grounded in religion. Surian et al. BIBREF21 also apply topic modeling to distinguish dominant opinions in the discussion about vaccination, and focus on HPV vaccination as discussed on Twitter. They find a common distinction between tweets reporting on personal experience and tweets that they characterize as `evidence’ (statements of having had a vaccination) and `advocacy’ (statements that support vaccination). Most similar to our work is the study by Du, Xu, Song, Liu and Tao BIBREF2. With the ultimate aim to improve the vaccine uptake, they applied supervised machine learning to analyse the stance towards vaccination as conveyed on social media. Messages were labeled as either related to vaccination or unrelated, and, when related, as ‘positive’, ‘negative’ or ‘neutral’. The ‘negative’ category was further broken down into several considerations, such as ‘safety’ and ‘cost’. After having annotated 6,000 tweets, they trained a classifier on different combinations of features, obtaining the highest macro F1-score (the average of the separate F1-scores for each prediction category) of $0.50$ and micro F1-score (the F1-score over all predictions) of $0.73$. Tweets with a negative stance that point to safety risks could best be predicted, at an optimal F1 score of $0.75$, while the other five sub-categories with a negative stance were predicted at an F1 score below $0.5$ or even $0.0$. Like Du et al. BIBREF2, we focus on analysing sentiment about vaccination using Twitter as a data source and applying supervised machine learning approaches to extract public opinion from tweets automatically. 
In contrast, in our evaluation we focus on detecting messages with a negative stance in particular. Accurately monitoring such messages helps to recognize discord in an early stage and take appropriate action. We do train machine learning classifiers on modeling other categories than the negative stance, evaluating whether this is beneficial to detecting tweets with a negative stance. For example, we study whether it is beneficial to this task to model tweets with a positive and neutral stance as well. We also inquire whether a more fine-grained categorization of sentiment (e.g.: worry, relief, frustration and informing) offers an advantage. Apart from comparing performance in the context of different categorizations, we compare different machine learning algorithms and compare data with different levels of annotation reliability. Finally, the performance of the resulting systems is compared to regular sentiment analysis common to social media monitoring dashboards. At the public health institute in the Netherlands, we make use of social media monitoring tools offered by Coosto. For defining whether a message is positive, negative or neutral with regard to vaccination, this system makes use of the presence or absence of positive or negative words in the messages. We believe that we could increase the sensitivity and specificity of the sentiment analysis by using supervised machine learning approaches trained on a manually coded dataset. The performance of our machine learning approaches is therefore compared to the sentiment analysis that is currently applied in the Coosto tool. Implementation We set out to curate a corpus of tweets annotated for their stance towards vaccination, and to employ this corpus to train a machine learning classifier to distinguish tweets with a negative stance towards vaccination from other tweets. In the following, we will describe the stages of data acquisition, from collection to labeling. Implementation ::: Data collection We queried Twitter messages that refer to a vaccination-related key term from TwiNL , a database with IDs of Dutch Twitter messages from January 2012 onwards BIBREF22. In contrast to the open Twitter Search API, which only allows one to query tweets posted within the last seven days, TwiNL makes it possible to collect a much larger sample of Twitter posts, ranging several years. We queried TwiNL for different key terms that relate to the topic of vaccination in a five-year period, ranging from January 1, 2012 until February 8, 2017. Query terms that we used were the word `vaccinatie’ (Dutch for `vaccination’) and six other terms closely related to vaccination, with and without a hashtag (`#’). Among the six words is `rijksvaccinatieprogramma’, which refers to the vaccination programme in The Netherlands. An overview of all query terms along with the number of tweets that could be collected based on them is displayed in Table TABREF5. We collected a total of 96,566 tweets from TwiNL, which we filtered in a number of ways. First, retweets were removed, as we wanted to focus on unique messages. This led to a removal of 31% of the messages. Second, we filtered out messages that contain a URL. Such messages often share a news headline and include a URL to refer to the complete news message. As a news headline does not reflect the stance of the person who posted the tweet, we decided to apply this filtering step. 
It is likely that part of the messages with a URL do include a message composed by the sender itself, but this step helps to clean many unwanted messages. Third, we removed messages that include a word related to animals and traveling (`dier’, animal; `landbouw’, agriculture; and `teek’, tick), as we strictly focus on messages that refer to vaccination that is part of the governmental vaccination program. 27,534 messages were left after filtering. This is the data set that is used for experimentation. Implementation ::: Data annotation The stance towards vaccination was categorized into `Negative’, `Neutral’, `Positive’ and `Not clear’. The latter category was essential, as some posts do not convey enough information about the stance of the writer. In addition to the four-valued stance classes we included separate classes grouped under relevance, subject and sentiment as annotation categories. With these additional categorizations we aimed to obtain a precise grasp of all possibly relevant tweet characteristics in relation to vaccination, which could help in a machine learning setting. The relevance categories were divided into `Relevant’, `Relevant abroad’ and `Irrelevant’. Despite our selection of vaccination-related keywords, tweets that mention these words might not refer to vaccination at all. A word like `vaccine’ might be used in a metaphorical sense, or the tweet might refer to vaccination of animals. The subject categorization was included to describe what the tweet is about primarily: `Vaccine’, `Disease’ or `Both’. We expected that a significant part of the tweets would focus on the severeness of a disease when discussing vaccination. Distinguishing these tweets could help the detection of the stance as well. Finally, the sentiment of tweets was categorized into `Informative’, `Angry/Frustration’, `Worried/Fear/Doubts’, `Relieved’ and `Other’, where the latter category lumps together occasional cases of humor, sarcasm, personal experience, and question raised. These categories were based on the article by BIBREF0, and emerged from analysing their H1N1-related tweets. The `Informative’ category refers to a typical type of message in which information is shared, potentially in support of a negative or positive stance towards vaccination. If the message contained more than one sentiment, the first sentiment identified was chosen. Table TABREF6 shows examples of tweets for the above-mentioned categories. We aimed at a sufficient number of annotated tweets to feed a machine learning classifier with. The majority of tweets were annotated twice. We built an annotation interface catered to the task. Upon being presented with the text of a Twitter post, the annotator was first asked whether the tweet was relevant. In case it was deemed relevant, the tweet could be annotated for the other categorizations. Otherwise, the user could click `OK’ after which he or she was directly presented with a new Twitter post. The annotator was presented with sampled messages that were either not annotated yet or annotated once. We ensured a fairly equal distribution of these two types, so that most tweets would be annotated twice. As annotators, we hired four student assistants and additionally made use of the Radboud Research Participation System. We asked participants to annotate for the duration of an hour, in exchange for a voucher valued ten Euros, or one course credit. 
Before starting the annotation, the participants were asked to read the annotation manual, with examples and an extensive description of the categories, and were presented with a short training round in which feedback on their annotations was given. The annotation period lasted for six weeks. We stopped when the number of applicants dropped. A total of 8,259 tweets were annotated, of which 6,472 were annotated twice (78%). 65 annotators joined in the study, with an average of $229.5$ annotated tweets per person. The number of annotations per person varied considerably, with $2,388$ tweets coded by the most active annotator. This variation is due to the different ways in which annotators were recruited: student-assistants were recruited for several days, while participants recruited through the Radboud Research Participation System could only join for the duration of an hour. We calculated inter-annotator agreement by Krippendorff's Alpha BIBREF23, which accounts for different annotator pairs and empty values. To also zoom in on the particular agreement by category, we calculated mutual F-scores for each of the categories. This metric is typically used to evaluate system performance by category on gold standard data, but could also be applied to annotation pairs by alternating the roles of the two annotators between classifier and ground truth. A summary of the agreement by categorization is given in Table TABREF10. While both the Relevance and Subject categorizations are annotated at a percent agreement of $0.71$ and $0.70$, their agreement scores are only fair, at $\alpha =0.27$ and $\alpha =0.29$. The percent agreement on Stance and Sentiment, which carry more categories than the former two, is $0.54$ for both. Their agreement scores are also fair, at $\alpha =0.35$ and $\alpha =0.34$. The mutual F-scores show marked differences in agreement by category, where the categories that were annotated most often typically yield a higher score. This holds for the Relevant category ($0.81$), the Vaccine category ($0.79$) and the Positive category ($0.64$). The Negative category yields a mutual F-score of $0.42$, which is higher than the more frequently annotated categories Neutral ($0.23$) and Not clear ($0.31$). We found that these categories are often confused. After combining the annotations of the two, the stance agreement would be increased to $\alpha =0.43$. The rather low agreement over the annotation categories indicates the difficulty of interpreting stance and sentiment in tweets that discuss the topic of vaccination. We therefore proceed with caution to categorize the data for training and testing our models. The agreed upon tweets will form the basis of our experimental data, as was proposed by Jakubiçek, Kovar and Rychly BIBREF24, while the other data is added as additional training material to see if the added quantity is beneficial to performance. We will also annotate a sample of the agreed upon tweets, to make sure that these data are reliable in spite of the low agreement rate. Implementation ::: Data categorization The labeled data that we composed based on the annotated tweets are displayed in Table TABREF11. We combined the Relevant and Relevant abroad categories into one category (`Relevant’), as only a small part of the tweets was annotated as Relevant abroad. We did not make use of the subject annotations, as a small minority of the tweets that were relevant referred a disease only. For the most important categorization, stance, we included all annotated labels. 
Finally, we combined part of the more frequent sentiment categories with Positive. We distinguish three types of labeled tweets: `strict’, `lax’ and `one’. The strictly labeled tweets were labeled by both annotators with the same label. The lax labels describe tweets that were only annotated with a certain category by one of the coders. The categories were ordered by importance to decide on the lax labels. For instance, in case of the third categorization, Negative was preferred over Positive, followed by Neutral, Not clear and Irrelevant. If one of the annotators labeled a tweet as Positive and the other as Neutral, the lax label for this tweet is Positive. In table TABREF11, the categories are ordered by preference as imposed on the lax labeling. The `one' labeling applies to all tweets that were annotated by only one annotator. Note that the total counts can differ between label categorizations due to the lax labeling: the counts for Positive labels in the Polarity + sentiment labeling (Positive + Frustration, Positive + Information and Positive + other) do not add up to the count of the Positive label in the Polarity labeling. With the `strict’, `lax’ and `one’ labeling, we end up with four variants of data to experiment with: only strict, strict + lax, strict + one and strict + lax + one. The strict data, which are most reliable, are used in all variants. By comparing different combinations of training data, we test whether the addition of less reliably labeled data (lax and/or one) boosts performance. The four labelings have an increasing granularity, where the numbers of examples for the Negative category are stable across each labeling. In the first labeling, these examples are contrasted with any other tweet. It hence comprises a binary classification task. In the second labeling, irrelevant tweets are indicated in a separate category. The Other class here represents all relevant tweets that do not convey a negative stance towards vaccination. In the third labeling, this class is specified as the stance categories Positive, Neutral and Not clear. In the fourth labeling, the Positive category, which is the most frequent polarity class, is further split into `Positive + frustration’, `Positive + Information’ and `Positive + Other’. Positivity about vaccination combined with a frustration sentiment reflects tweets that convey frustration about the arguments of people who are negative about vaccination (e.g.: "I just read that a 17 year old girl died of the measles. Because she did not want an inoculation due to strict religious beliefs. -.- #ridiculous"). The Positive + Information category reflects tweets that provide information in favor of vaccination, or combined with a positive stance towards vaccination (e.g.: "#shingles is especially common with the elderly and chronically diseased. #vaccination can prevent much suffering. #prevention"). In line with Kovár, Rychlý and Jakubíček BIBREF25, we evaluate system performance only on the reliable part of the annotations - the instances labeled with the same label by two annotators. As the overall agreement is not sufficient, with Krippendorff's Alpha ranging between $0.27$ and $0.35$, the first author annotated 300 tweets sampled from the strict data (without knowledge of the annotations) to rule out the possibility that these agreed upon annotations are due to chance agreement. Comparing these new annotations to the original ones, the Negative category and the Positive category are agreed upon at mutual F-scores of $0.70$ and $0.81$. 
The percent agreement on the binary classification scheme (e.g.: Negative versus Other) is $0.92$, with $\alpha =0.67$, which decreases to $\alpha =0.55$ for the Relevance categorization, $\alpha =0.54$ for the Polarity categorization and $\alpha =0.43$ for the Polarity + Sentiment categorization. We find that instances of a negative and positive stance can be clearly identified by humans, while the labels Neutral and Not Clear are less clear cut. Since it is our focus to model tweets with a negative stance, the agreement on the binary decision between Negative and Other is just sufficient to use for experimentation based on Krippendorff’s BIBREF26 remark that “$\alpha \ge .667$ is the lowest conceivable limit” (p.241). In our experimental set-up we will therefore only evaluate our system performance on distinguishing the Negative category from any other category in the strict data. Implementation ::: Experimental Set-up For each combination of labeling (four types of labeling) and training data (four combinations of training data) we train a machine learning classifier to best distinguish the given labels. Two different classifiers are compared: Multinomial Naive Bayes and Support Vector Machines (SVM). In total, this makes for 32 variants (4 labelings $\times $ 4 combinations of training data $\times $ 2 classifiers). All settings are tested through ten-fold cross-validation on the strict data and are compared against two rule-based sentiment analysis baselines and two random baselines. All components of the experimental set-up are described in more detail below. Implementation ::: Experimental Set-up ::: Preprocessing To properly distinguish word tokens and punctuation we tokenized the tweets by means of Ucto, a rule-based tokenizer with good performance on the Dutch language, and with a configuration specific for Twitter. Tokens were lowercased in order to focus on the content. Punctuation was maintained, as well as emoji and emoticons. Such markers could be predictive in the context of a discussion such as vaccination. To account for sequences of words and characters that might carry useful information, we extracted word unigrams, bigrams, and trigrams as features. Features were coded binary, i.e. set to 1 if a feature is seen in a message and set to 0 otherwise. During training, all features apart from the top 15,000 most frequent ones were removed. Implementation ::: Experimental Set-up ::: Machine Learning We applied two machine learning algorithms with a different perspective on the data: Multinomial Naive Bayes and SVM. The former algorithm is often used on textual data. It models the Bayesian probability of features to belong to a class and makes predictions based on a linear calculation. Features are naively seen as independent of one another BIBREF27. In their simplest form, SVMs are binary linear classifiers that make use of kernels. They search for the optimal hyperplane in the feature space that maximizes the geometric margin between any two classes. The advantage of SVMs is that they provide a solution to a global optimization problem, thereby reducing the generalization error of the classifier BIBREF28. We applied both algorithms by means of the scikit-learn toolkit, a python library that offers implementations of many machine learning algorithms BIBREF29. To cope with imbalance in the number of instances per label, for Multinomial Naive Bayes we set the Alpha parameter to $0.0$ and muted the fit prior. 
For SVM, we used a linear kernel with the $C$ parameter set to $1.0$ and a balanced class weight. Implementation ::: Experimental Set-up ::: Baselines As baselines, we applied two rule-based sentiment analysis systems for Dutch as well as two random baselines. The first rule-based sentiment analysis system is Pattern, an off-the-shelf sentiment analysis system that makes use of a list of adjectives with a positive or negative weight, based on human annotations BIBREF30. Sentences are assigned a score between $-1.0$ and $1.0$ by multiplying the scores of their adjectives. Bigrams like `horribly good’ are seen as one adjective, where the adjective `horribly’ increases the positivity score of `good’. We translated the polarity score into the discrete labels `Negative’, `Positive’ and `Neutral’ by using the training data to infer which threshold leads to the best performance on the `Negative’ category. The second baseline is the sentiment analysis offered by the social media monitoring dashboard Coosto. As Coosto is a commercial product, there is no public documentation on their sentiment analysis tool. In addition to these two baselines, we applied two random baselines: predicting the negative class randomly for 50% of the messages and predicting the negative class randomly for 15% of the messages. The latter proportion relates to the proportion of vaccination-hesitant tweets in the strictly labeled data on which we test the systems. Implementation ::: Evaluation We evaluate performance by means of ten-fold cross-validation on the strictly labeled data. In each of the folds, 90% of the strictly labeled data is used as training data, which are complemented with the laxly labeled data and/or the data labeled by one annotator, in three of the four training data variants. Performance is always tested on the strict data. As evaluation metrics we calculate the F1-score and the Area Under the ROC Curve (AUC) on predicting the negative stance towards vaccination in the test tweets. Results We trained machine learning (ML) classifiers to distinguish Twitter messages with a negative stance towards vaccination, alternating three aspects of the system: the labels to train on, the composition of the training data and the ML algorithm. The results are presented in Table TABREF15, as the F1-score and AUC of any setting on correctly predicting tweets with a negative stance. Systems with specific combinations of the ML classifier and size of the training data are given in the rows of the table. The four types of labelings are listed in the columns. The results show a tendency for each of the three manipulations. Regarding the ML algorithm, SVM consistently outperforms Naive Bayes for this task. Furthermore, adding additional training data, albeit less reliable, generally improves performance. Training a model on all available data (strict + lax + one) leads to an improvement over using only the strict data, while adding only the laxly labeled data is generally better than using all data. Adding only the data labeled by one annotator often leads to a worse performance. With respect to the labeling, the Polarity-sentiment labeling generally leads to the best outcomes, although the overall best outcome is yielded by training an SVM on Polarity labeling with strict data appended by lax data, at an area under the curve score of $0.66$. The best reported performance is an F1-score of $0.36$ and an AUC of $0.66$. In comparison to the baselines (Table TABREF16), these scores are considerably higher. 
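The evaluation procedure described above, ten-fold cross-validation in which only the strictly labeled data are used for testing while the laxly or singly labeled data are only ever added to the training folds, can be sketched as follows. The function and variable names are hypothetical; vectorizer and classifier are objects of the kind sketched earlier, and labels are assumed to be strings such as "Negative".

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score, roc_auc_score

def cross_validate_negative(strict_texts, strict_labels, extra_texts, extra_labels,
                            vectorizer, classifier, n_splits=10):
    # Ten-fold CV on the strict data; the extra (lax and/or one) data are added
    # to every training fold but never to a test fold.
    strict_labels = np.asarray(strict_labels)
    f1_scores, auc_scores = [], []
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=1)
    for train_idx, test_idx in folds.split(strict_texts, strict_labels):
        train_texts = [strict_texts[i] for i in train_idx] + list(extra_texts)
        train_labels = list(strict_labels[train_idx]) + list(extra_labels)
        test_texts = [strict_texts[i] for i in test_idx]
        test_labels = strict_labels[test_idx]

        X_train = vectorizer.fit_transform(train_texts)
        X_test = vectorizer.transform(test_texts)
        classifier.fit(X_train, train_labels)

        predictions = classifier.predict(X_test)
        f1_scores.append(f1_score(test_labels, predictions, pos_label="Negative"))

        # Probability of the Negative class, used for the AUC.
        negative_column = list(classifier.classes_).index("Negative")
        negative_scores = classifier.predict_proba(X_test)[:, negative_column]
        auc_scores.append(roc_auc_score(test_labels == "Negative", negative_scores))
    return np.mean(f1_scores), np.mean(auc_scores)

Because the less reliably labeled data never enter a test fold, the F1 and AUC scores remain comparable across the four training data variants.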
Nevertheless, there is room for improvement. The performance of the random baselines, with F1-scores of $0.18$ (50%) and $0.13$ (15%), indicates that the minimal performance on this task is rather low. The rule-based sentiment analyses yield better performances, at an F1-score of $0.20$ for Pattern and $0.25$ for Coosto. To analyse the behavior of the best ML system, we present a confusion table of its classifications in Table TABREF17. The Irrelevant category is most often classified with one of the other categories, while the Positive and Negative categories are most often confused with one another. The classifier is possibly identifying features that denote a stance, but struggles to distinguish positive from negative. To gain insight into the potential of increasing the amount of training data, we evaluated the best ML system (SVM trained on strict and lax data on the polarity labels) on a fixed 10% of the strictly labeled data, training it on an increasing portion of the remaining data, from a small sample up to all available data (excluding the test data). The learning curve is presented in Figure FIGREF18. It shows that performance keeps improving up to the point where the last portion of training data is added, indicating that more training data would likely yield better performance. Results ::: Comparison machine learning and rule-based sentiment analysis A confusion table of the predictions of the best of the two rule-based baselines, Pattern, and the best ML system is displayed in Table TABREF19. Only 192 tweets are labeled by both systems as Negative, while the best ML system accounts for almost double this amount and Pattern for three times as much. Comparing the predictions to the gold standard labeling, 99 of the tweets predicted only by the best ML system as Negative are correct (27%), as opposed to 51 that are exclusive to Pattern (8%). Of the tweets that were classified by both as negative, 63 are correct (33%). This shows that the approaches have a rather complementary view on tweets with a negative stance. To gain more insight into the behavior of both approaches, we applied them to 15,577 unlabeled tweets. Table TABREF20 presents a confusion table with the numbers of tweets that were classified as Negative or another category by both approaches. Again, Pattern accounts for the majority of negatively labeled messages, and the overlap is small. Two of the authors validated, for a sample of 600 messages, whether they actually manifested a negative attitude towards vaccination: 200 messages that were uniquely classified by the best ML system as Negative, 200 messages that were solely labeled as Negative by Pattern and 200 messages that were classified by both systems as Negative. This validation showed the same tendency as for the labeled data, with a higher precision of the best ML system in comparison to Pattern (33.5% versus 21% of the messages correctly predicted) and the highest precision when both systems predicted the negative class (36%). The complementary view on tweets with a negative stance between the best ML system and rule-based sentiment analysis becomes clear from their differing predictions. To make this difference concrete, we present a selection of messages along with the predictions of both systems in Table TABREF21. The first three are only predicted by the best ML system as Negative, and not by Pattern, while the fourth through sixth examples are only seen as Negative by Pattern. 
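The learning-curve analysis of Figure FIGREF18 mentioned above can be approximated with a sketch along the following lines, which holds out a fixed test portion of the strict data and trains the best system on increasing fractions of the (already shuffled) training material; all names are hypothetical.

from sklearn.metrics import f1_score

def learning_curve_scores(train_texts, train_labels, test_texts, test_labels,
                          vectorizer, classifier,
                          fractions=(0.1, 0.25, 0.5, 0.75, 1.0)):
    # Train on a growing portion of the training data and report the F1-score
    # for the Negative class on the fixed test set.
    scores = []
    for fraction in fractions:
        cutoff = max(1, int(len(train_texts) * fraction))
        X_train = vectorizer.fit_transform(train_texts[:cutoff])
        X_test = vectorizer.transform(test_texts)
        classifier.fit(X_train, train_labels[:cutoff])
        predictions = classifier.predict(X_test)
        scores.append((fraction, f1_score(test_labels, predictions, pos_label="Negative")))
    return scores

Plotting these scores against the training fractions reproduces the shape of the learning curve.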
Where the first three examples give arguments (`can not be compared...’, `kids are dying from it’) or take a stance (`I’m opposed to...’), the latter three display more intensified words and exclamations (`that’s the message!!’, `Arrogant’, `horrific’) and aggression towards a person or organization. The last three tweets are seen by both systems as Negative. They are characterized by intensified words that are strongly linked to a negative stance towards vaccination (`dangerous’, `suffering’, `get lost with your compulsory vaccination’). Table TABREF21 also features tweets that were predicted as Negative by neither the best ML system nor Pattern, representing the most difficult instances of the task. The first two tweets include markers that explicitly point to a negative stance, such as `not been proven' and `vaccinating is nonsense'. The third tweet manifests a negative stance by means of the sarcastic phrase `way to go' (English translation). The use of sarcasm, where typically positive words are used to convey a negative valence, complicates this task of stance prediction. The last tweet advocates an alternative to vaccination, which can implicitly be interpreted as a negative stance towards vaccination. Such implicitly packaged viewpoints also hamper the prediction of negative stance. Both sarcasm and implicit stance could be addressed by specific modules. Results ::: Improving recall For monitoring the number of Twitter messages over time that are negative towards vaccination, it is arguably more important to detect them at a high recall than at a high precision. False Positives (messages incorrectly flagged as Negative) could be filtered manually by a human end user, while False Negatives (messages with a negative stance that are not detected) will be missed. We set out to improve recall, making use of classifier confidence scores and the complementary classifications of Pattern and the best ML system. A first recall-improving approach is to adjust the prediction threshold for the Negative category. For any given instance, the SVM classifier estimates the probability of all categories it was trained on. It will predict the Negative category for an instance if its probability exceeds the probabilities of the other categories. This prediction can be altered by changing the threshold; setting the threshold higher will generally mean that fewer instances will be predicted as Negative (corresponding to a higher precision), whereas setting it lower will mean more instances will be predicted as such (corresponding to a higher recall). Thus, the balance between precision and recall can be set as desired, to favor one or the other. However, in many cases, changing the threshold will not lead to a (strong) increase in overall performance. Figure FIGREF22 presents the balance between recall and precision as a result of predicting the Negative category with the best ML system, when the threshold for this category is altered from lowest to highest. Compared to the standard recall of $0.43$ at a precision of $0.29$, increasing the recall to $0.60$ would lead to a drop of precision to $0.21$. The F1-score would then decrease to $0.31$. A second means by which recall might be improved is to employ ensemble classification. The comparison in the previous section between the best ML method and rule-based sentiment analysis revealed that both systems have a rather disjoint perspective on negative stance: many more tweets are labeled as `Negative' by only one of the two systems than by both. 
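The threshold-shifting analysis behind Figure FIGREF22 can be sketched with scikit-learn's precision-recall utilities, applied to the classifier's estimated probability of the Negative class on the test tweets; the variable names below are hypothetical.

from sklearn.metrics import precision_recall_curve

# neg_probabilities: estimated probability of the Negative class per test tweet;
# y_true_negative: boolean array, True where the gold label is Negative.
precision, recall, thresholds = precision_recall_curve(y_true_negative, neg_probabilities)

# Walk along the curve to inspect the trade-off, for instance around the recall
# of 0.60 discussed above; the F1-score follows from each precision/recall pair.
for p, r, t in zip(precision, recall, thresholds):
    if r >= 0.60 and (p + r) > 0:
        print("threshold %.3f: precision %.2f, recall %.2f, F1 %.2f" % (t, p, r, 2 * p * r / (p + r)))

# Applying a chosen threshold t then simply means flagging every tweet whose
# Negative-class probability is at least t:
# flagged_negative = neg_probabilities >= t

The ensemble alternative, described next, instead combines the complementary predictions of the two systems.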
We therefore built an ensemble system that follows both systems in their perspective on tweets with a negative stance: for each tweet, if either of the systems predicts the Negative category, the ensemble system makes this prediction. The performance of the ensemble system is presented in Table TABREF23. Of the 343 tweets in the test set that are labeled as Negative, 210 are retrieved by the ensemble system. The result is a recall of $0.61$. The system does overshoot in its categorization of tweets as Negative: this category is predicted for 1,168 tweets (about 40% of the total test set of 2,886 tweets). The result is a precision of $0.18$. In comparison to lowering the prediction threshold of the ML system, the ensemble system thus yields a slightly worse trade-off between precision and recall. Discussion With an F1-score of $0.36$, our system lags behind the $0.75$ F1-score reported by Du et al. BIBREF2. Several factors might have influenced this difference. A first factor is the low proportion of tweets with the label `Negative' in our dataset. In the strict labeling condition, only 343 cases are labeled as negative by two annotators, against 2,543 labeled as positive: the negative cases comprise only 13% of all instances. In the study of Du et al., the anti-vaccination category comprises 24% of all instances (1,445 tweets). More (reliable) examples might have helped in our study to train a better model of negative tweets. Secondly, Du et al. BIBREF2 focused on the English language domain, while we worked with Dutch Twitter messages. The Dutch Twitter realm harbors less data to study than the English one, and might bring forward different discussions when it comes to the topic of vaccination. It could be that the senders' stance towards vaccination is more difficult to pinpoint within these discussions. In line with this language difference, a third prominent factor that might have led to a higher performance in the study of Du et al. BIBREF2 is that they focus on a particular case of vaccination (namely HPV vaccination) and split the anti-vaccination category into several more specific categories that describe the motivation of this stance. The diverse motivations for being against vaccination are indeed reflected in several other studies that focus on identifying discussion communities and viewpoints BIBREF17, BIBREF21, BIBREF19. While splitting the data into more specific categories will lead to fewer examples per category, it could boost performance on predicting certain categories due to a larger homogeneity. Indeed, the most dominant negative category in the study by Du et al. BIBREF2, dubbed `NegSafety' and occurring in 912 tweets (63% of all negative tweets), yielded the highest F1-score of $0.75$. While two less frequent categories were predicted at an F1-score of $0.0$, this outcome shows the benefit of breaking down the motivations behind a negative stance towards vaccination. A major limitation of our study is that the agreement rates for all categorizations are low. This is also the case in other studies, like BIBREF8, who report an agreement of $K = 0.40$ on polarity categorization. Foremost, this reflects the difficulty of the task. The way in which the stance towards vaccination is manifested in a tweet depends on the author, his or her specific viewpoint, the moment in time at which a tweet was posted, and the possible conversation thread that precedes it. Making a judgment solely based on the text could be difficult without this context. 
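For completeness, the OR-style ensemble of Table TABREF23 that was evaluated above amounts to a one-line combination of the two systems' predictions; the boolean arrays below are hypothetical.

import numpy as np

# ml_negative and pattern_negative: boolean arrays over the test tweets, True where
# the respective system predicts the Negative category; y_true_negative as before.
ensemble_negative = np.logical_or(ml_negative, pattern_negative)

true_positives = np.logical_and(ensemble_negative, y_true_negative).sum()
recall = true_positives / y_true_negative.sum()       # 210 / 343, about 0.61 in the text
precision = true_positives / ensemble_negative.sum()  # about 0.18 in the text

Taking the union of the two sets of Negative predictions maximizes recall at the cost of precision, which is exactly the trade-off reported above.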
Annotator agreement could possibly be improved by presenting the annotator with the preceding conversation as context to the tweet. Furthermore, tweets could be coded by more than two annotators. This would give insight into the subtleties of the data, with a graded scale ranging from tweets that clearly manifest a negative stance towards vaccination to tweets that merely hint at such a stance. Such a procedure could likewise help to generate more reliable examples to train a machine learning classifier. The low agreement rates also indicate that measuring stance towards vaccination in tweets is too difficult a task to assign to a machine alone. We believe that the human-in-the-loop could be an important asset in any monitoring dashboard that focuses on stance in particular discussions. The system will have an important role in filtering the bigger stream of messages, ideally leaving the human with a manageable set of messages to sift through, so as to end up with reliable statistics on the stance that is seen in the discussion at any point in time. In the analysis section, we explored two approaches to increase recall of messages with a negative stance, which would be most useful in this scenario. Lowering the prediction threshold proved to be the most effective to this end. Our primary aim in future work is to improve performance. We did not experiment with different types of features in our current study. Word embeddings might help to include more semantics in our classifier’s model. In addition, domain knowledge could be added by including word lists, and different components might be combined to address different features of the data (e.g.: sarcasm and implicit stance). We also aim to divide the negative category into the specific motivations behind a negative stance towards vaccination, as in the study of Du et al. BIBREF2, so as to obtain more homogeneous categories. Parallel to this new categorization of the data, adding more labeled data appears to be the most effective way to improve our model. The learning curve that we present in Figure FIGREF18 shows that no performance plateau is reached with the current size of the data. An active learning setting BIBREF31, starting with the current system, could be applied to select additional tweets to annotate. Such a setting could be incorporated in the practical scenario where a human-in-the-loop judges the messages that were flagged as displaying a negative stance by the system. The messages that are judged as correctly and incorrectly predicted could be added as additional reliable training data to improve upon the model. We have installed a dashboard that caters to such a procedure, starting with the machine learning system that yielded the best performance in our current study. Conclusions We set out to train a classifier to distinguish Twitter messages that display a negative stance towards vaccination from other messages that discuss the topic of vaccination. Based on a set of 8,259 tweets that mention a vaccination-related keyword, annotated for their relevance, stance and sentiment, we tested a multitude of machine learning classifiers, alternating the algorithm, the reliability of training data and the labels to train on. The best performance, with a precision of $0.29$, a recall of $0.43$, an F1-score of $0.36$ and an AUC of $0.66$, was yielded by training an SVM classifier on strictly and laxly labeled data to distinguish irrelevant tweets and polarity categories. 
The baselines, with an optimal F1-score of $0.25$ (rule-based sentiment analysis), were considerably outperformed. The latter shows the benefit of machine-learned classifiers on domain-specific sentiment: despite being trained on a reasonably small amount of data, the machine-learning approach outperforms general-purpose sentiment analysis tools.

Availability and requirements
Project name: Prikbord
Project home page: http://prikbord.science.ru.nl/
Operating system: Linux
Programming language: Python, javascript
Other requirements: Django 1.5.11 or higher, MongoDB 2.6.10, pymongo 2.7.2 or higher, requests 2.13.0 or higher
License: GNU GPL
Any restrictions to use by non-academics: licence needed

Abbreviations
EMM: Europe Media Monitor
MMR: Mumps, Measles, Rubella
LDA: Latent Dirichlet Allocation
ML: Machine learning
SVM: Support Vector Machines
AUC: Area under the ROC Curve
Clf: Classifier
NB: Naive Bayes
Pr: Precision
Re: Recall

Declarations ::: Ethics approval and consent to participate
Not applicable.
Declarations ::: Consent for publication
Not applicable.
Declarations ::: Availability of data and materials
http://cls.ru.nl/fkunneman/data_stance_vaccination.zip
Declarations ::: Competing interests
The authors declare that they have no competing interests.
Declarations ::: Funding
This study has been funded by the Rijksinstituut voor Volksgezondheid en Milieu.
Declarations ::: Author's contributions
FK has set up the annotation procedure, performed the machine learning experiments and analysis, annotated tweets in the analysis and did a major part of the writing. ML has done part of the writing in the Introduction and Conclusion sections. AW has advised on the experimentation and analysis. AB has advised on the experimentation and has edited the complete text. LM has set up the annotation procedure, annotated tweets in the analysis and has done a major part of the writing. All authors read and approved the final manuscript.
Declarations ::: Acknowledgements
We thank Erik Tjong Kim Sang for the development and support of the http://twiqs.nl service. We also thank the ones who have contributed with annotations.
word unigrams, bigrams, and trigrams
4e468ce13b7f6ac05371c62c08c3cec1cd760517
4e468ce13b7f6ac05371c62c08c3cec1cd760517_0
Q: Do they allow for messages with vaccination-related key terms to be of neutral stance?
Yes
3fb4334e5a4702acd44bd24eb1831bb7e9b98d31
3fb4334e5a4702acd44bd24eb1831bb7e9b98d31_0
Q: How big are the datasets used? Text: Introduction Machine Reading Comprehension (MRC) has been a popular task to test the reading ability of the machine, which requires to read text material and answer the questions based on it. Starting from cloze-style reading comprehension, various neural network approaches have been proposed and massive progresses have been made in creating large-scale datasets and neural models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Though various types of contributions had been made, most works are dealing with English reading comprehension. Reading comprehension in other than English has not been well-addressed mainly due to the lack of large-scale training data. To enrich the training data, there are two traditional approaches. Firstly, we can annotate data by human experts, which is ideal and high-quality, while it is time-consuming and rather expensive. One can also obtain large-scale automatically generated data BIBREF0, BIBREF1, BIBREF6, but the quality is far beyond the usable threshold. Another way is to exploit cross-lingual approaches to utilize the data in rich-resource language to implicitly learn the relations between $<$passage, question, answer$>$. In this paper, we propose the Cross-Lingual Machine Reading Comprehension (CLMRC) task that aims to help reading comprehension in low-resource languages. First, we present several back-translation approaches when there is no or partially available resources in the target language. Then we propose a novel model called Dual BERT to further improve the system performance when there is training data available in the target language. We first translate target language training data into English to form pseudo bilingual parallel data. Then we use multilingual BERT BIBREF7 to simultaneously model the $<$passage, question, answer$>$ in both languages, and fuse the representations of both to generate final predictions. Experimental results on two Chinese reading comprehension dataset CMRC 2018 BIBREF8 and DRCD BIBREF9 show that by utilizing English resources could substantially improve system performance and the proposed Dual BERT achieves state-of-the-art performances on both datasets, and even surpass human performance on some metrics. Also, we conduct experiments on the Japanese and French SQuAD BIBREF10 and achieves substantial improvements. Moreover, detailed ablations and analysis are carried out to demonstrate the effectiveness of exploiting knowledge from rich-resource language. To best of our knowledge, this is the first time that the cross-lingual approaches applied and evaluated on realistic reading comprehension data. The main contributions of our paper can be concluded as follows. [leftmargin=*] We present several back-translation based reading comprehension approaches and yield state-of-the-art performances on several reading comprehension datasets, including Chinese, Japanese, and French. We propose a model called Dual BERT to simultaneously model the $<$passage, question$>$ in both source and target language to enrich the text representations. Experimental results on two public Chinese reading comprehension datasets show that the proposed cross-lingual approaches yield significant improvements over various baseline systems and set new state-of-the-art performances. Related Works Machine Reading Comprehension (MRC) has been a trending research topic in recent years. 
Among various types of MRC tasks, span-extraction reading comprehension has been enormously popular (such as SQuAD BIBREF4), and we have seen a great progress on related neural network approaches BIBREF11, BIBREF12, BIBREF13, BIBREF3, BIBREF14, especially those were built on pre-trained language models, such as BERT BIBREF7. While massive achievements have been made by the community, reading comprehension in other than English has not been well-studied mainly due to the lack of large-scale training data. BIBREF10 proposed to use runtime machine translation for multilingual extractive reading comprehension. They first translate the data from the target language to English and then obtain an answer using an English reading comprehension model. Finally, they recover the corresponding answer in the original language using soft-alignment attention scores from the NMT model. However, though an interesting attempt has been made, the zero-shot results are quite low, and alignments between different languages, especially for those have different word orders, are significantly different. Also, they only evaluate on a rather small dataset (hundreds of samples) that was translated from SQuAD BIBREF4, which is not that realistic. To solve the issues above and better exploit large-scale rich-resourced reading comprehension data, in this paper, we propose several zero-shot approaches which yield state-of-the-art performances on Japanese and French SQuAD data. Moreover, we also propose a supervised approach for the condition that there are training samples available for the target language. To evaluate the effectiveness of our approach, we carried out experiments on two realistic public Chinese reading comprehension data: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. Experimental results demonstrate the effectiveness by modeling training samples in a bilingual environment. Back-Translation Approaches In this section, we illustrate back-translation approaches for cross-lingual machine reading comprehension, which is natural and easy to implement. Before introducing these approaches in detail, we will clarify crucial terminologies in this paper for better understanding. [leftmargin=*] Source Language: Rich-resourced and has sufficient large-scale training data that we aim to extract knowledge from. We use subscript S for variables in the source language. Target Language: Low-resourced and has only a few training data that we wish to optimize on. We use subscript T for variables in the target language. In this paper, we aim to improve the machine reading comprehension performance in Chinese (target language) by introducing English (source language) resources. The general idea of back-translation approaches is to translate $<$passage, question$>$ pair into the source language and generate an answer using a reading comprehension system in the source language. Finally, the generated answer is back-translated into the target language. In the following subsections, we will introduce several back-translation approaches for cross-lingual machine reading comprehension task. The architectures of the proposed back-translation approaches are depicted in Figure FIGREF5. Back-Translation Approaches ::: GNMT To build a simple cross-lingual machine reading comprehension system, it is straightforward to utilize translation system to bridge source and target language BIBREF10. Briefly, we first translate the target sample to the source language. 
Then we use a source reading comprehension system, such as BERT BIBREF7, to generate an answer in the source language. Finally, we use back-translation to get the answer in the target language. As we do not exploit any training data in the target language, we could regard this approach as a zero-shot cross-lingual baseline system. Specifically, we use Google Neural Machine Translation (GNMT) system for source-to-target and target-to-source translations. One may also use advanced and domain-specific neural machine translation system to achieve better translation performance, while we leave it for individuals, and this is beyond the scope of this paper. However, for span-extraction reading comprehension task, a major drawback of this approach is that the translated answer may not be the exact span in the target passage. To remedy this, we propose three simple approaches to improve the quality of the translated answer in the target language. Back-Translation Approaches ::: Simple Match We propose a simple approach to align the translated answer into extract span in the target passage. We calculate character-level text overlap (for Chinese) between translated answer $A_{trans}$ and arbitrary sliding window in target passage $\mathcal {P}_{T[i:j]}$. The length of sliding window ranges $len(A_{trans}) \pm \delta $, with a relax parameter $\delta $. Typically, the relax parameter $\delta \in [0,5]$ as the length between ground truth and translated answer does not differ much in length. In this way, we would calculate character-level F1-score of each candidate span $\mathcal {P}_{T[i:j]}$ and translated answer $A_{trans}$, and we could choose the best matching one accordingly. Using the proposed SimpleMatch could ensure the predicted answer is an exact span in target passage. As SimpleMatch does not use target training data either, it could also be a pipeline component in zero-shot settings. Back-Translation Approaches ::: Answer Aligner Though we could use unsupervised approaches for aligning answer, such as the proposed SimpleMatch, it stops at token-level and lacks semantic awareness between the translated answer and ground truth answer. In this paper, we also propose two supervised approaches for further improving the answer span when there is training data available in the target language. The first one is Answer Aligner, where we feed translated answer $\mathcal {A}_{trans}$ and target passage $\mathcal {P}_{T}$ into the BERT and outputs the ground truth answer span $\mathcal {A}_{T}$. The model will learn the semantic relations between them and generate improved span for the target language. Back-Translation Approaches ::: Answer Verifier In Answer Aligner, we did not exploit question information in target training data. One can also utilize question information to transform Answer Aligner into Answer Verifier, as we use complete $\langle \mathcal {P}_T, \mathcal {Q}_T, \mathcal {A}_T \rangle $ in the target language and additional translated answer $\mathcal {A}_{trans}$ to verify its correctness and generate improved span. Dual BERT One disadvantage of the back-translation approaches is that we have to recover the source answer into the target language. To remedy the issue, in this paper, we propose a novel model called Dual BERT to simultaneously model the training data in both source and target language to better exploit the relations among $<$passage, question, answer$>$. 
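Before detailing Dual BERT, the SimpleMatch procedure described earlier in this section can be sketched as follows. This is a rough character-level implementation written from the description above (sliding windows of length len(A_trans) plus or minus a relax parameter delta, scored by character-level F1 against the translated answer); the function and variable names are ours, not the authors'.

from collections import Counter

def char_f1(pred, gold):
    # Character-level F1 between two strings (suitable for Chinese text).
    common = Counter(pred) & Counter(gold)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def simple_match(passage, translated_answer, delta=5):
    # Return the passage span with the highest character-level F1 against the
    # translated answer, considering window lengths len(answer) +/- delta.
    best_span, best_f1 = "", -1.0
    base = len(translated_answer)
    for length in range(max(1, base - delta), base + delta + 1):
        for start in range(0, len(passage) - length + 1):
            span = passage[start:start + length]
            f1 = char_f1(span, translated_answer)
            if f1 > best_f1:
                best_span, best_f1 = span, f1
    return best_span

Because the returned span is always a substring of the target passage, the exact-span requirement of span-extraction evaluation is satisfied by construction; the discussion now returns to Dual BERT.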
The model could be used when there is training data available for the target language, and we could better utilize source language data to enhance the target reading comprehension system. The overall neural architecture for Dual BERT is shown in Figure FIGREF11. Dual BERT ::: Dual Encoder Bidirectional Encoder Representation from Transformers (BERT) has shown marvelous performance in various NLP tasks, which substantially outperforms non-pretrained models by a large margin BIBREF7. In this paper, we use multi-lingual BERT for better encoding the text in both source and target language. Formally, given target passage $\mathcal {P}_{T}$ and question $\mathcal {Q}_{T}$, we organize the input $X_{T}$ for BERT as follows. [CLS] ${\mathcal {Q}_{T}}$ [SEP] ${\mathcal {P}_{T}}$ [SEP] Similarly, we can also obtain source training sample by translating target sample with GNMT, forming input $X_{S}$ for BERT. Then we use $X_{T}$ and $X_{S}$ to obtain deep contextualized representations through a shared multi-lingual BERT, forming $B_{T}\in \mathbb {R}^{L_T*h}, B_{S}\in \mathbb {R}^{L_S*h}$, where $L$ represents the length of input and $h$ is the hidden size (768 for multi-lingual BERT). Dual BERT ::: Bilingual Decoder Typically, in the reading comprehension task, attention mechanism is used to measure the relations between the passage and question. Moreover, as Transformers are fundamental components of BERT, multi-head self-attention layer BIBREF15 is used to extract useful information within the input sequence. Specifically, in our model, to enhance the target representation, we use a multi-head self-attention layer to extract useful information in source BERT representation $B_{S}$. We aim to generate target span by not only relying on target representation but also on source representation to simultaneously consider the $<$passage, question$>$ relations in both languages, which can be seen as a bilingual decoding process. Briefly, we regard target BERT representation $B_T$ as query and source BERT representation $B_S$ as key and value in multi-head attention mechanism. In original multi-head attention, we calculate a raw dot attention as follows. This will result in an attention matrix $A_{TS}$ that indicate raw relations between each source and target token. To combine the benefit of both inter-attention and self-attention, instead of using Equation 1, we propose a simple modification on multi-head attention mechanism, which is called Self-Adaptive Attention (SAA). First, we calculate self-attention of $B_T$ and $B_S$ and apply the softmax function, as shown in Equation 2 and 3. This is designed to use self-attention to filter the irrelevant part within each representation firstly, and inform the raw dot attention on paying more attention to the self-attended part, making the attention more precise and accurate. Then we use self-attention $A_T$ and $A_S$, inter-attention $A_{TS}$ to get self-attentive attention $\tilde{A}_{TS}$. We calculate dot product between ${A}_{ST}$ and $B_S$ to obtain attended representation $R^{\prime } \in \mathbb {R}^{L_T*h}$. After obtaining attended representation $R^{\prime }$, we use an additional fully connected layer with residual layer normalization which is similar to BERT implementation. Finally, we calculate weighted sum of $H_{T}$ to get final span prediction $P_{T}^{s}, P_{T}^{e}$ (superscript $s$ for start, $e$ for end). For example, the start position $P_{T}^{s}$ is calculated by the following equation. 
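In the standard span-pointer formulation this weighted sum takes the form $P_{T}^{s} = \mathrm{softmax}(H_{T} w^{s})$ and, analogously for the end position, $P_{T}^{e} = \mathrm{softmax}(H_{T} w^{e})$, where $w^{s}, w^{e} \in \mathbb{R}^{h}$ are learned weight vectors; this is a reconstruction consistent with the description above and may differ in detail from the authors' exact parameterization.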
We calculate standard cross entropy loss for the start and end predictions in the target language. Dual BERT ::: Auxiliary Output In order to evaluate how translated sample behaves in the source language system, we also generate span prediction for source language using BERT representation $B_S$ directly without further calculation, resulting in the start and target prediction $P_{S}^{s}, P_{S}^{e}$ (similar to Equation 8). Moreover, we also calculate cross-entropy loss $\mathcal {L}_{aux}$ for translated sample (similar to Equation 9), where a $\lambda $ parameter is applied to this loss. Instead of setting $\lambda $ with heuristic value, in this paper, we propose a novel approach to better adjust $\lambda $ automatically. As the sample was generated by the machine translation system, there would be information loss during the translation process. Wrong or partially translated samples may harm the performance of reading comprehension system. To measure how the translated samples assemble the real target samples, we calculate cosine similarity between the ground truth span representation in source and target language (denoted as $\tilde{H}_{S}$ and $\tilde{H}_{T}$). When the ground truth span representation in the translated sample is similar to the real target samples, the $\lambda $ increase; otherwise, we only use target span loss as $\lambda $ may decrease to zero. The span representation is the concatenation of three parts: BERT representation of ground truth start $B^s \in \mathbb {R}^{h} $, ground truth end $B^e \in \mathbb {R}^{h}$, and self-attended span $B^{att} \in \mathbb {R}^{h}$, which considers both boundary information (start/end) and mixed representation of the whole ground truth span. We use BERT representation $B$ to get a self-attended span representation $B^{att}$ using a simple dot product with average pooling, to get a 2D-tensor. The overall loss for Dual BERT is composed by two parts: target span loss $\mathcal {L}_{T}$ and auxiliary span loss in source language $\mathcal {L}_{aux}$. Experiments ::: Experimental Setups We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29. Note that, since the test and challenge sets are preserved by CMRC 2018 official to ensure the integrity of the evaluation process, we submitted our best-performing systems to the organizers to get these scores. The resource in source language was chosen as SQuAD BIBREF4 training data. The settings of the proposed approaches are listed below in detail. [leftmargin=*] Tokenization: Following the official BERT implementation, we use WordPiece tokenizer BIBREF16 for English and character-level tokenizer for Chinese. BERT: We use pre-trained English BERT on SQuAD 1.1 BIBREF4 for initialization, denoted as SQ-$B_{en}$ (base) and SQ-$L_{en}$ (large) for back-translation approaches. For other conditions, we use multi-lingual BERT as default, denoted as $B_{mul}$ (and SQ-$B_{mul}$ for those were pre-trained on SQuAD). Translation: We use Google Neural Machine Translation (GNMT) system for translation. We evaluated GNMT system on NIST MT02/03/04/05/06/08 Chinese-English set and achieved an average BLEU score of 43.24, compared to previous best work (43.20) BIBREF17, yielding state-of-the-art performance. 
Optimization: Following original BERT implementation, we use Adam with weight decay optimizer BIBREF18 using an initial learning rate of 4e-5 and use cosine learning rate decay scheme instead of the original linear decay, which we found it beneficial for stabilizing results. The training batch size is set to 64, and each model is trained for 2 epochs, which roughly takes 1 hour. Implementation: We modified the TensorFlow BIBREF19 version run_squad.py provided by BERT. All models are trained on Cloud TPU v2 that has 64GB HBM. Experiments ::: Overall Results The overall results are shown in Table TABREF37. As we can see that, without using any alignment approach, the zero-shot results are quite lower regardless of using English BERT-base (#1) or BERT-large (#2). When we apply SimpleMatch (#3), we observe significant improvements demonstrating its effectiveness. The Answer Aligner (#4) could further improve the performance beyond SimpleMatch approach, demonstrating that the machine learning approach could dynamically adjust the span output by learning the semantic relationship between translated answer and target passage. Also, the Answer Verifier (#5) could further boost performance and surpass the multi-lingual BERT baseline (#7) that only use target training data, demonstrating that it is beneficial to adopt rich-resourced language to improve machine reading comprehension in other languages. When we do not use SQuAD pre-trained weights, the proposed Dual BERT (#8) yields significant improvements (all results are verified by p-test with $p<0.05$) over both Chinese BERT (#6) and multi-lingual BERT (#7) by a large margin. If we only train the BERT with SQuAD (#9), which is a zero-shot system, we can see that it achieves decent performance on two Chinese reading comprehension data. Moreover, we can also pursue further improvements by continue training (#10) with Chinese data starting from the system #9, or mixing Chinese data with SQuAD and training from initial multi-lingual BERT (#11). Under powerful SQuAD pre-trained baselines, Dual BERT (#12) still gives moderate and consistent improvements over Cascade Training (#10) and Mixed Training (#11) baselines and set new state-of-the-art performances on both datasets, demonstrating the effectiveness of using machine-translated sample to enhance the Chinese reading comprehension performance. Experiments ::: Results on Japanese and French SQuAD In this paper, we propose a simple but effective approach called SimpleMatch to align translated answer to original passage span. While one may argue that using neural machine translation attention to project source answer to original target passage span is ideal as used in BIBREF10. However, to extract attention value in neural machine translation system and apply it to extract the original passage span is bothersome and computationally ineffective. To demonstrate the effectiveness of using SimpleMatch instead of using NMT attention to extract original passage span in zero-shot condition, we applied SimpleMatch to Japanese and French SQuAD (304 samples for each) which is what exactly used in BIBREF10. The results are listed in Table TABREF40. From the results, we can see that, though our baseline (GNMT+BERT$_{L_{en}}$) is higher than previous work (Back-Translation BIBREF10), when using SimpleMatch to extract original passage span could obtain competitive of even larger improvements. 
In Japanese SQuAD, the F1 score improved by 9.6 in BIBREF10 using NMT attention, while we obtain larger improvement with 11.8 points demonstrating the effectiveness of the proposed method. BERT with pre-trained SQuAD weights yields the best performance among these systems, as it does not require the machine translation process and has unified text representations for different languages. Experiments ::: Ablation Studies In this section, we ablate important components in our model to explicitly demonstrate its effectiveness. The ablation results are depicted in Table TABREF42. As we can see that, removing SQuAD pre-trained weights (i.e., using randomly initialized BERT) hurts the performance most, suggesting that it is beneficial to use pre-trained weights though the source and the target language is different. Removing source BERT will degenerate to cascade training, and the results show that it also harms overall performance, demonstrating that it is beneficial to utilize translated sample for better characterizing the relations between $<$passage, question, answer$>$. The other modifications seem to also consistently decrease the performance to some extent, but not as salient as the data-related components (last two lines), indicating that data-related approaches are important in cross-lingual machine reading comprehension task. Discussion In our preliminary cross-lingual experiments, we adopt English as our source language data. However, one question remains unclear. Is it better to pre-train with larger data in a distant language (such as English, as oppose to Simplified Chinese), or with smaller data in closer language (such as Traditional Chinese)? To investigate the problem, we plot the multi-lingual BERT performance on the CMRC 2018 development data using different language and data size in the pre-training stage. The results are depicted in Figure FIGREF43, and we come to several observations. Firstly, when the size of pre-training data is under 25k (training data size of DRCD), we can see that there is no much difference whether we use Chinese or English data for pre-training, and even the English pre-trained models are better than Chinese pre-trained models in most of the times, which is not expected. We suspect that, by using multi-lingual BERT, the model tend to provide universal representations for the text and learn the language-independent semantic relations among the inputs which is ideal for cross-lingual tasks, thus the model is not that sensitive to the language in the pre-training stage. Also, as training data size of SQuAD is larger than DRCD, we could use more data for pre-training. When we add more SQuAD data ($>$25k) in the pre-training stage, the performance on the downstream task (CMRC 2018) continues to improve significantly. In this context, we conclude that, When the pre-training data is not abundant, there is no special preference on the selection of source (pre-training) language. If there are large-scale training data available for several languages, we should select the source language as the one that has the largest training data rather than its linguistic similarity to the target language. Furthermore, one could also take advantages of data in various languages, but not only in a bilingual environment, to further exploit knowledge from various sources, which is beyond the scope of this paper and we leave this for future work. Conclusion In this paper, we propose Cross-Lingual Machine Reading Comprehension (CLMRC) task. 
When there is no training data available for the target language, firstly, we provide several zero-shot approaches that were initially trained on English and transfer to other languages, along with three methods to improve the translated answer span by using unsupervised and supervised approaches. When there is training data available for the target language, we propose a novel model called Dual BERT to simultaneously model the $<$passage, question, answer$>$ in source and target languages using multi-lingual BERT. The proposed method takes advantage of the large-scale training data by rich-resource language (such as SQuAD) and learns the semantic relations between the passage and question in both source and target language. Experiments on two Chinese machine reading comprehension datasets indicate that the proposed model could give consistent and significant improvements over various state-of-the-art systems by a large margin and set baselines for future research on CLMRC task. Future studies on cross-lingual machine reading comprehension will focus on 1) how to utilize various types of English reading comprehension data; 2) cross-lingual machine reading comprehension without the translation process, etc. Acknowledgments We would like to thank all anonymous reviewers for their thorough reviewing and providing constructive comments to improve our paper. The first author was partially supported by the Google TensorFlow Research Cloud (TFRC) program for Cloud TPU access. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011, and 61772153.
Evaluation datasets used: CMRC 2018 - 18939 questions, 10 answers; DRCD - 33953 questions, 5 answers; NIST MT02/03/04/05/06/08 Chinese-English - not specified. Source language train data: SQuAD - not specified.
a9acd1af4a869c17b95ec489cdb1ba7d76715ea4
a9acd1af4a869c17b95ec489cdb1ba7d76715ea4_0
Q: Is this a span-based (extractive) QA task? Text: Introduction Machine Reading Comprehension (MRC) has been a popular task to test the reading ability of the machine, which requires to read text material and answer the questions based on it. Starting from cloze-style reading comprehension, various neural network approaches have been proposed and massive progresses have been made in creating large-scale datasets and neural models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Though various types of contributions had been made, most works are dealing with English reading comprehension. Reading comprehension in other than English has not been well-addressed mainly due to the lack of large-scale training data. To enrich the training data, there are two traditional approaches. Firstly, we can annotate data by human experts, which is ideal and high-quality, while it is time-consuming and rather expensive. One can also obtain large-scale automatically generated data BIBREF0, BIBREF1, BIBREF6, but the quality is far beyond the usable threshold. Another way is to exploit cross-lingual approaches to utilize the data in rich-resource language to implicitly learn the relations between $<$passage, question, answer$>$. In this paper, we propose the Cross-Lingual Machine Reading Comprehension (CLMRC) task that aims to help reading comprehension in low-resource languages. First, we present several back-translation approaches when there is no or partially available resources in the target language. Then we propose a novel model called Dual BERT to further improve the system performance when there is training data available in the target language. We first translate target language training data into English to form pseudo bilingual parallel data. Then we use multilingual BERT BIBREF7 to simultaneously model the $<$passage, question, answer$>$ in both languages, and fuse the representations of both to generate final predictions. Experimental results on two Chinese reading comprehension dataset CMRC 2018 BIBREF8 and DRCD BIBREF9 show that by utilizing English resources could substantially improve system performance and the proposed Dual BERT achieves state-of-the-art performances on both datasets, and even surpass human performance on some metrics. Also, we conduct experiments on the Japanese and French SQuAD BIBREF10 and achieves substantial improvements. Moreover, detailed ablations and analysis are carried out to demonstrate the effectiveness of exploiting knowledge from rich-resource language. To best of our knowledge, this is the first time that the cross-lingual approaches applied and evaluated on realistic reading comprehension data. The main contributions of our paper can be concluded as follows. [leftmargin=*] We present several back-translation based reading comprehension approaches and yield state-of-the-art performances on several reading comprehension datasets, including Chinese, Japanese, and French. We propose a model called Dual BERT to simultaneously model the $<$passage, question$>$ in both source and target language to enrich the text representations. Experimental results on two public Chinese reading comprehension datasets show that the proposed cross-lingual approaches yield significant improvements over various baseline systems and set new state-of-the-art performances. Related Works Machine Reading Comprehension (MRC) has been a trending research topic in recent years. 
Among various types of MRC tasks, span-extraction reading comprehension has been enormously popular (such as SQuAD BIBREF4), and we have seen a great progress on related neural network approaches BIBREF11, BIBREF12, BIBREF13, BIBREF3, BIBREF14, especially those were built on pre-trained language models, such as BERT BIBREF7. While massive achievements have been made by the community, reading comprehension in other than English has not been well-studied mainly due to the lack of large-scale training data. BIBREF10 proposed to use runtime machine translation for multilingual extractive reading comprehension. They first translate the data from the target language to English and then obtain an answer using an English reading comprehension model. Finally, they recover the corresponding answer in the original language using soft-alignment attention scores from the NMT model. However, though an interesting attempt has been made, the zero-shot results are quite low, and alignments between different languages, especially for those have different word orders, are significantly different. Also, they only evaluate on a rather small dataset (hundreds of samples) that was translated from SQuAD BIBREF4, which is not that realistic. To solve the issues above and better exploit large-scale rich-resourced reading comprehension data, in this paper, we propose several zero-shot approaches which yield state-of-the-art performances on Japanese and French SQuAD data. Moreover, we also propose a supervised approach for the condition that there are training samples available for the target language. To evaluate the effectiveness of our approach, we carried out experiments on two realistic public Chinese reading comprehension data: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. Experimental results demonstrate the effectiveness by modeling training samples in a bilingual environment. Back-Translation Approaches In this section, we illustrate back-translation approaches for cross-lingual machine reading comprehension, which is natural and easy to implement. Before introducing these approaches in detail, we will clarify crucial terminologies in this paper for better understanding. [leftmargin=*] Source Language: Rich-resourced and has sufficient large-scale training data that we aim to extract knowledge from. We use subscript S for variables in the source language. Target Language: Low-resourced and has only a few training data that we wish to optimize on. We use subscript T for variables in the target language. In this paper, we aim to improve the machine reading comprehension performance in Chinese (target language) by introducing English (source language) resources. The general idea of back-translation approaches is to translate $<$passage, question$>$ pair into the source language and generate an answer using a reading comprehension system in the source language. Finally, the generated answer is back-translated into the target language. In the following subsections, we will introduce several back-translation approaches for cross-lingual machine reading comprehension task. The architectures of the proposed back-translation approaches are depicted in Figure FIGREF5. Back-Translation Approaches ::: GNMT To build a simple cross-lingual machine reading comprehension system, it is straightforward to utilize translation system to bridge source and target language BIBREF10. Briefly, we first translate the target sample to the source language. 
Then we use a source reading comprehension system, such as BERT BIBREF7, to generate an answer in the source language. Finally, we use back-translation to get the answer in the target language. As we do not exploit any training data in the target language, we could regard this approach as a zero-shot cross-lingual baseline system. Specifically, we use Google Neural Machine Translation (GNMT) system for source-to-target and target-to-source translations. One may also use advanced and domain-specific neural machine translation system to achieve better translation performance, while we leave it for individuals, and this is beyond the scope of this paper. However, for span-extraction reading comprehension task, a major drawback of this approach is that the translated answer may not be the exact span in the target passage. To remedy this, we propose three simple approaches to improve the quality of the translated answer in the target language. Back-Translation Approaches ::: Simple Match We propose a simple approach to align the translated answer into extract span in the target passage. We calculate character-level text overlap (for Chinese) between translated answer $A_{trans}$ and arbitrary sliding window in target passage $\mathcal {P}_{T[i:j]}$. The length of sliding window ranges $len(A_{trans}) \pm \delta $, with a relax parameter $\delta $. Typically, the relax parameter $\delta \in [0,5]$ as the length between ground truth and translated answer does not differ much in length. In this way, we would calculate character-level F1-score of each candidate span $\mathcal {P}_{T[i:j]}$ and translated answer $A_{trans}$, and we could choose the best matching one accordingly. Using the proposed SimpleMatch could ensure the predicted answer is an exact span in target passage. As SimpleMatch does not use target training data either, it could also be a pipeline component in zero-shot settings. Back-Translation Approaches ::: Answer Aligner Though we could use unsupervised approaches for aligning answer, such as the proposed SimpleMatch, it stops at token-level and lacks semantic awareness between the translated answer and ground truth answer. In this paper, we also propose two supervised approaches for further improving the answer span when there is training data available in the target language. The first one is Answer Aligner, where we feed translated answer $\mathcal {A}_{trans}$ and target passage $\mathcal {P}_{T}$ into the BERT and outputs the ground truth answer span $\mathcal {A}_{T}$. The model will learn the semantic relations between them and generate improved span for the target language. Back-Translation Approaches ::: Answer Verifier In Answer Aligner, we did not exploit question information in target training data. One can also utilize question information to transform Answer Aligner into Answer Verifier, as we use complete $\langle \mathcal {P}_T, \mathcal {Q}_T, \mathcal {A}_T \rangle $ in the target language and additional translated answer $\mathcal {A}_{trans}$ to verify its correctness and generate improved span. Dual BERT One disadvantage of the back-translation approaches is that we have to recover the source answer into the target language. To remedy the issue, in this paper, we propose a novel model called Dual BERT to simultaneously model the training data in both source and target language to better exploit the relations among $<$passage, question, answer$>$. 
The model could be used when there is training data available for the target language, and we could better utilize source language data to enhance the target reading comprehension system. The overall neural architecture for Dual BERT is shown in Figure FIGREF11. Dual BERT ::: Dual Encoder Bidirectional Encoder Representation from Transformers (BERT) has shown marvelous performance in various NLP tasks, which substantially outperforms non-pretrained models by a large margin BIBREF7. In this paper, we use multi-lingual BERT for better encoding the text in both source and target language. Formally, given target passage $\mathcal {P}_{T}$ and question $\mathcal {Q}_{T}$, we organize the input $X_{T}$ for BERT as follows. [CLS] ${\mathcal {Q}_{T}}$ [SEP] ${\mathcal {P}_{T}}$ [SEP] Similarly, we can also obtain source training sample by translating target sample with GNMT, forming input $X_{S}$ for BERT. Then we use $X_{T}$ and $X_{S}$ to obtain deep contextualized representations through a shared multi-lingual BERT, forming $B_{T}\in \mathbb {R}^{L_T*h}, B_{S}\in \mathbb {R}^{L_S*h}$, where $L$ represents the length of input and $h$ is the hidden size (768 for multi-lingual BERT). Dual BERT ::: Bilingual Decoder Typically, in the reading comprehension task, attention mechanism is used to measure the relations between the passage and question. Moreover, as Transformers are fundamental components of BERT, multi-head self-attention layer BIBREF15 is used to extract useful information within the input sequence. Specifically, in our model, to enhance the target representation, we use a multi-head self-attention layer to extract useful information in source BERT representation $B_{S}$. We aim to generate target span by not only relying on target representation but also on source representation to simultaneously consider the $<$passage, question$>$ relations in both languages, which can be seen as a bilingual decoding process. Briefly, we regard target BERT representation $B_T$ as query and source BERT representation $B_S$ as key and value in multi-head attention mechanism. In original multi-head attention, we calculate a raw dot attention as follows. This will result in an attention matrix $A_{TS}$ that indicate raw relations between each source and target token. To combine the benefit of both inter-attention and self-attention, instead of using Equation 1, we propose a simple modification on multi-head attention mechanism, which is called Self-Adaptive Attention (SAA). First, we calculate self-attention of $B_T$ and $B_S$ and apply the softmax function, as shown in Equation 2 and 3. This is designed to use self-attention to filter the irrelevant part within each representation firstly, and inform the raw dot attention on paying more attention to the self-attended part, making the attention more precise and accurate. Then we use self-attention $A_T$ and $A_S$, inter-attention $A_{TS}$ to get self-attentive attention $\tilde{A}_{TS}$. We calculate dot product between ${A}_{ST}$ and $B_S$ to obtain attended representation $R^{\prime } \in \mathbb {R}^{L_T*h}$. After obtaining attended representation $R^{\prime }$, we use an additional fully connected layer with residual layer normalization which is similar to BERT implementation. Finally, we calculate weighted sum of $H_{T}$ to get final span prediction $P_{T}^{s}, P_{T}^{e}$ (superscript $s$ for start, $e$ for end). For example, the start position $P_{T}^{s}$ is calculated by the following equation. 
We calculate standard cross entropy loss for the start and end predictions in the target language. Dual BERT ::: Auxiliary Output In order to evaluate how translated sample behaves in the source language system, we also generate span prediction for source language using BERT representation $B_S$ directly without further calculation, resulting in the start and target prediction $P_{S}^{s}, P_{S}^{e}$ (similar to Equation 8). Moreover, we also calculate cross-entropy loss $\mathcal {L}_{aux}$ for translated sample (similar to Equation 9), where a $\lambda $ parameter is applied to this loss. Instead of setting $\lambda $ with heuristic value, in this paper, we propose a novel approach to better adjust $\lambda $ automatically. As the sample was generated by the machine translation system, there would be information loss during the translation process. Wrong or partially translated samples may harm the performance of reading comprehension system. To measure how the translated samples assemble the real target samples, we calculate cosine similarity between the ground truth span representation in source and target language (denoted as $\tilde{H}_{S}$ and $\tilde{H}_{T}$). When the ground truth span representation in the translated sample is similar to the real target samples, the $\lambda $ increase; otherwise, we only use target span loss as $\lambda $ may decrease to zero. The span representation is the concatenation of three parts: BERT representation of ground truth start $B^s \in \mathbb {R}^{h} $, ground truth end $B^e \in \mathbb {R}^{h}$, and self-attended span $B^{att} \in \mathbb {R}^{h}$, which considers both boundary information (start/end) and mixed representation of the whole ground truth span. We use BERT representation $B$ to get a self-attended span representation $B^{att}$ using a simple dot product with average pooling, to get a 2D-tensor. The overall loss for Dual BERT is composed by two parts: target span loss $\mathcal {L}_{T}$ and auxiliary span loss in source language $\mathcal {L}_{aux}$. Experiments ::: Experimental Setups We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29. Note that, since the test and challenge sets are preserved by CMRC 2018 official to ensure the integrity of the evaluation process, we submitted our best-performing systems to the organizers to get these scores. The resource in source language was chosen as SQuAD BIBREF4 training data. The settings of the proposed approaches are listed below in detail. [leftmargin=*] Tokenization: Following the official BERT implementation, we use WordPiece tokenizer BIBREF16 for English and character-level tokenizer for Chinese. BERT: We use pre-trained English BERT on SQuAD 1.1 BIBREF4 for initialization, denoted as SQ-$B_{en}$ (base) and SQ-$L_{en}$ (large) for back-translation approaches. For other conditions, we use multi-lingual BERT as default, denoted as $B_{mul}$ (and SQ-$B_{mul}$ for those were pre-trained on SQuAD). Translation: We use Google Neural Machine Translation (GNMT) system for translation. We evaluated GNMT system on NIST MT02/03/04/05/06/08 Chinese-English set and achieved an average BLEU score of 43.24, compared to previous best work (43.20) BIBREF17, yielding state-of-the-art performance. 
Optimization: Following original BERT implementation, we use Adam with weight decay optimizer BIBREF18 using an initial learning rate of 4e-5 and use cosine learning rate decay scheme instead of the original linear decay, which we found it beneficial for stabilizing results. The training batch size is set to 64, and each model is trained for 2 epochs, which roughly takes 1 hour. Implementation: We modified the TensorFlow BIBREF19 version run_squad.py provided by BERT. All models are trained on Cloud TPU v2 that has 64GB HBM. Experiments ::: Overall Results The overall results are shown in Table TABREF37. As we can see that, without using any alignment approach, the zero-shot results are quite lower regardless of using English BERT-base (#1) or BERT-large (#2). When we apply SimpleMatch (#3), we observe significant improvements demonstrating its effectiveness. The Answer Aligner (#4) could further improve the performance beyond SimpleMatch approach, demonstrating that the machine learning approach could dynamically adjust the span output by learning the semantic relationship between translated answer and target passage. Also, the Answer Verifier (#5) could further boost performance and surpass the multi-lingual BERT baseline (#7) that only use target training data, demonstrating that it is beneficial to adopt rich-resourced language to improve machine reading comprehension in other languages. When we do not use SQuAD pre-trained weights, the proposed Dual BERT (#8) yields significant improvements (all results are verified by p-test with $p<0.05$) over both Chinese BERT (#6) and multi-lingual BERT (#7) by a large margin. If we only train the BERT with SQuAD (#9), which is a zero-shot system, we can see that it achieves decent performance on two Chinese reading comprehension data. Moreover, we can also pursue further improvements by continue training (#10) with Chinese data starting from the system #9, or mixing Chinese data with SQuAD and training from initial multi-lingual BERT (#11). Under powerful SQuAD pre-trained baselines, Dual BERT (#12) still gives moderate and consistent improvements over Cascade Training (#10) and Mixed Training (#11) baselines and set new state-of-the-art performances on both datasets, demonstrating the effectiveness of using machine-translated sample to enhance the Chinese reading comprehension performance. Experiments ::: Results on Japanese and French SQuAD In this paper, we propose a simple but effective approach called SimpleMatch to align translated answer to original passage span. While one may argue that using neural machine translation attention to project source answer to original target passage span is ideal as used in BIBREF10. However, to extract attention value in neural machine translation system and apply it to extract the original passage span is bothersome and computationally ineffective. To demonstrate the effectiveness of using SimpleMatch instead of using NMT attention to extract original passage span in zero-shot condition, we applied SimpleMatch to Japanese and French SQuAD (304 samples for each) which is what exactly used in BIBREF10. The results are listed in Table TABREF40. From the results, we can see that, though our baseline (GNMT+BERT$_{L_{en}}$) is higher than previous work (Back-Translation BIBREF10), when using SimpleMatch to extract original passage span could obtain competitive of even larger improvements. 
In Japanese SQuAD, the F1 score improved by 9.6 in BIBREF10 using NMT attention, while we obtain larger improvement with 11.8 points demonstrating the effectiveness of the proposed method. BERT with pre-trained SQuAD weights yields the best performance among these systems, as it does not require the machine translation process and has unified text representations for different languages. Experiments ::: Ablation Studies In this section, we ablate important components in our model to explicitly demonstrate its effectiveness. The ablation results are depicted in Table TABREF42. As we can see that, removing SQuAD pre-trained weights (i.e., using randomly initialized BERT) hurts the performance most, suggesting that it is beneficial to use pre-trained weights though the source and the target language is different. Removing source BERT will degenerate to cascade training, and the results show that it also harms overall performance, demonstrating that it is beneficial to utilize translated sample for better characterizing the relations between $<$passage, question, answer$>$. The other modifications seem to also consistently decrease the performance to some extent, but not as salient as the data-related components (last two lines), indicating that data-related approaches are important in cross-lingual machine reading comprehension task. Discussion In our preliminary cross-lingual experiments, we adopt English as our source language data. However, one question remains unclear. Is it better to pre-train with larger data in a distant language (such as English, as oppose to Simplified Chinese), or with smaller data in closer language (such as Traditional Chinese)? To investigate the problem, we plot the multi-lingual BERT performance on the CMRC 2018 development data using different language and data size in the pre-training stage. The results are depicted in Figure FIGREF43, and we come to several observations. Firstly, when the size of pre-training data is under 25k (training data size of DRCD), we can see that there is no much difference whether we use Chinese or English data for pre-training, and even the English pre-trained models are better than Chinese pre-trained models in most of the times, which is not expected. We suspect that, by using multi-lingual BERT, the model tend to provide universal representations for the text and learn the language-independent semantic relations among the inputs which is ideal for cross-lingual tasks, thus the model is not that sensitive to the language in the pre-training stage. Also, as training data size of SQuAD is larger than DRCD, we could use more data for pre-training. When we add more SQuAD data ($>$25k) in the pre-training stage, the performance on the downstream task (CMRC 2018) continues to improve significantly. In this context, we conclude that, When the pre-training data is not abundant, there is no special preference on the selection of source (pre-training) language. If there are large-scale training data available for several languages, we should select the source language as the one that has the largest training data rather than its linguistic similarity to the target language. Furthermore, one could also take advantages of data in various languages, but not only in a bilingual environment, to further exploit knowledge from various sources, which is beyond the scope of this paper and we leave this for future work. Conclusion In this paper, we propose Cross-Lingual Machine Reading Comprehension (CLMRC) task. 
When there is no training data available for the target language, firstly, we provide several zero-shot approaches that were initially trained on English and transfer to other languages, along with three methods to improve the translated answer span by using unsupervised and supervised approaches. When there is training data available for the target language, we propose a novel model called Dual BERT to simultaneously model the $<$passage, question, answer$>$ in source and target languages using multi-lingual BERT. The proposed method takes advantage of the large-scale training data by rich-resource language (such as SQuAD) and learns the semantic relations between the passage and question in both source and target language. Experiments on two Chinese machine reading comprehension datasets indicate that the proposed model could give consistent and significant improvements over various state-of-the-art systems by a large margin and set baselines for future research on CLMRC task. Future studies on cross-lingual machine reading comprehension will focus on 1) how to utilize various types of English reading comprehension data; 2) cross-lingual machine reading comprehension without the translation process, etc. Acknowledgments We would like to thank all anonymous reviewers for their thorough reviewing and providing constructive comments to improve our paper. The first author was partially supported by the Google TensorFlow Research Cloud (TFRC) program for Cloud TPU access. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011, and 61772153.
Yes
afa94772fca7978f30973c43274ed826c40369eb
afa94772fca7978f30973c43274ed826c40369eb_0
Q: Are the contexts in a language different from the questions? Text: Introduction Machine Reading Comprehension (MRC) has been a popular task for testing the reading ability of machines, requiring them to read text material and answer questions based on it. Starting from cloze-style reading comprehension, various neural network approaches have been proposed and massive progress has been made in creating large-scale datasets and neural models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Though various types of contributions have been made, most works deal with English reading comprehension. Reading comprehension in languages other than English has not been well addressed, mainly due to the lack of large-scale training data. To enrich the training data, there are two traditional approaches. First, we can have data annotated by human experts, which is ideal and high-quality, but time-consuming and rather expensive. One can also obtain large-scale automatically generated data BIBREF0, BIBREF1, BIBREF6, but the quality is far below the usable threshold. Another way is to exploit cross-lingual approaches that utilize data in a rich-resource language to implicitly learn the relations between $<$passage, question, answer$>$. In this paper, we propose the Cross-Lingual Machine Reading Comprehension (CLMRC) task, which aims to help reading comprehension in low-resource languages. First, we present several back-translation approaches for when there are no or only partially available resources in the target language. Then we propose a novel model called Dual BERT to further improve system performance when there is training data available in the target language. We first translate target language training data into English to form pseudo bilingual parallel data. Then we use multilingual BERT BIBREF7 to simultaneously model the $<$passage, question, answer$>$ in both languages, and fuse the representations of both to generate final predictions. Experimental results on two Chinese reading comprehension datasets, CMRC 2018 BIBREF8 and DRCD BIBREF9, show that utilizing English resources can substantially improve system performance, and the proposed Dual BERT achieves state-of-the-art performance on both datasets, even surpassing human performance on some metrics. We also conduct experiments on the Japanese and French SQuAD BIBREF10 and achieve substantial improvements. Moreover, detailed ablations and analysis are carried out to demonstrate the effectiveness of exploiting knowledge from the rich-resource language. To the best of our knowledge, this is the first time that cross-lingual approaches have been applied and evaluated on realistic reading comprehension data. The main contributions of our paper can be summarized as follows. We present several back-translation based reading comprehension approaches that yield state-of-the-art performance on several reading comprehension datasets, including Chinese, Japanese, and French. We propose a model called Dual BERT to simultaneously model the $<$passage, question$>$ in both source and target languages to enrich the text representations. Experimental results on two public Chinese reading comprehension datasets show that the proposed cross-lingual approaches yield significant improvements over various baseline systems and set new state-of-the-art performance. Related Works Machine Reading Comprehension (MRC) has been a trending research topic in recent years. 
Among various types of MRC tasks, span-extraction reading comprehension has been enormously popular (such as SQuAD BIBREF4), and we have seen a great progress on related neural network approaches BIBREF11, BIBREF12, BIBREF13, BIBREF3, BIBREF14, especially those were built on pre-trained language models, such as BERT BIBREF7. While massive achievements have been made by the community, reading comprehension in other than English has not been well-studied mainly due to the lack of large-scale training data. BIBREF10 proposed to use runtime machine translation for multilingual extractive reading comprehension. They first translate the data from the target language to English and then obtain an answer using an English reading comprehension model. Finally, they recover the corresponding answer in the original language using soft-alignment attention scores from the NMT model. However, though an interesting attempt has been made, the zero-shot results are quite low, and alignments between different languages, especially for those have different word orders, are significantly different. Also, they only evaluate on a rather small dataset (hundreds of samples) that was translated from SQuAD BIBREF4, which is not that realistic. To solve the issues above and better exploit large-scale rich-resourced reading comprehension data, in this paper, we propose several zero-shot approaches which yield state-of-the-art performances on Japanese and French SQuAD data. Moreover, we also propose a supervised approach for the condition that there are training samples available for the target language. To evaluate the effectiveness of our approach, we carried out experiments on two realistic public Chinese reading comprehension data: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. Experimental results demonstrate the effectiveness by modeling training samples in a bilingual environment. Back-Translation Approaches In this section, we illustrate back-translation approaches for cross-lingual machine reading comprehension, which is natural and easy to implement. Before introducing these approaches in detail, we will clarify crucial terminologies in this paper for better understanding. [leftmargin=*] Source Language: Rich-resourced and has sufficient large-scale training data that we aim to extract knowledge from. We use subscript S for variables in the source language. Target Language: Low-resourced and has only a few training data that we wish to optimize on. We use subscript T for variables in the target language. In this paper, we aim to improve the machine reading comprehension performance in Chinese (target language) by introducing English (source language) resources. The general idea of back-translation approaches is to translate $<$passage, question$>$ pair into the source language and generate an answer using a reading comprehension system in the source language. Finally, the generated answer is back-translated into the target language. In the following subsections, we will introduce several back-translation approaches for cross-lingual machine reading comprehension task. The architectures of the proposed back-translation approaches are depicted in Figure FIGREF5. Back-Translation Approaches ::: GNMT To build a simple cross-lingual machine reading comprehension system, it is straightforward to utilize translation system to bridge source and target language BIBREF10. Briefly, we first translate the target sample to the source language. 
Then we use a source reading comprehension system, such as BERT BIBREF7, to generate an answer in the source language. Finally, we use back-translation to get the answer in the target language. As we do not exploit any training data in the target language, we could regard this approach as a zero-shot cross-lingual baseline system. Specifically, we use Google Neural Machine Translation (GNMT) system for source-to-target and target-to-source translations. One may also use advanced and domain-specific neural machine translation system to achieve better translation performance, while we leave it for individuals, and this is beyond the scope of this paper. However, for span-extraction reading comprehension task, a major drawback of this approach is that the translated answer may not be the exact span in the target passage. To remedy this, we propose three simple approaches to improve the quality of the translated answer in the target language. Back-Translation Approaches ::: Simple Match We propose a simple approach to align the translated answer into extract span in the target passage. We calculate character-level text overlap (for Chinese) between translated answer $A_{trans}$ and arbitrary sliding window in target passage $\mathcal {P}_{T[i:j]}$. The length of sliding window ranges $len(A_{trans}) \pm \delta $, with a relax parameter $\delta $. Typically, the relax parameter $\delta \in [0,5]$ as the length between ground truth and translated answer does not differ much in length. In this way, we would calculate character-level F1-score of each candidate span $\mathcal {P}_{T[i:j]}$ and translated answer $A_{trans}$, and we could choose the best matching one accordingly. Using the proposed SimpleMatch could ensure the predicted answer is an exact span in target passage. As SimpleMatch does not use target training data either, it could also be a pipeline component in zero-shot settings. Back-Translation Approaches ::: Answer Aligner Though we could use unsupervised approaches for aligning answer, such as the proposed SimpleMatch, it stops at token-level and lacks semantic awareness between the translated answer and ground truth answer. In this paper, we also propose two supervised approaches for further improving the answer span when there is training data available in the target language. The first one is Answer Aligner, where we feed translated answer $\mathcal {A}_{trans}$ and target passage $\mathcal {P}_{T}$ into the BERT and outputs the ground truth answer span $\mathcal {A}_{T}$. The model will learn the semantic relations between them and generate improved span for the target language. Back-Translation Approaches ::: Answer Verifier In Answer Aligner, we did not exploit question information in target training data. One can also utilize question information to transform Answer Aligner into Answer Verifier, as we use complete $\langle \mathcal {P}_T, \mathcal {Q}_T, \mathcal {A}_T \rangle $ in the target language and additional translated answer $\mathcal {A}_{trans}$ to verify its correctness and generate improved span. Dual BERT One disadvantage of the back-translation approaches is that we have to recover the source answer into the target language. To remedy the issue, in this paper, we propose a novel model called Dual BERT to simultaneously model the training data in both source and target language to better exploit the relations among $<$passage, question, answer$>$. 
The model could be used when there is training data available for the target language, and we could better utilize source language data to enhance the target reading comprehension system. The overall neural architecture for Dual BERT is shown in Figure FIGREF11. Dual BERT ::: Dual Encoder Bidirectional Encoder Representation from Transformers (BERT) has shown marvelous performance in various NLP tasks, which substantially outperforms non-pretrained models by a large margin BIBREF7. In this paper, we use multi-lingual BERT for better encoding the text in both source and target language. Formally, given target passage $\mathcal {P}_{T}$ and question $\mathcal {Q}_{T}$, we organize the input $X_{T}$ for BERT as follows. [CLS] ${\mathcal {Q}_{T}}$ [SEP] ${\mathcal {P}_{T}}$ [SEP] Similarly, we can also obtain source training sample by translating target sample with GNMT, forming input $X_{S}$ for BERT. Then we use $X_{T}$ and $X_{S}$ to obtain deep contextualized representations through a shared multi-lingual BERT, forming $B_{T}\in \mathbb {R}^{L_T*h}, B_{S}\in \mathbb {R}^{L_S*h}$, where $L$ represents the length of input and $h$ is the hidden size (768 for multi-lingual BERT). Dual BERT ::: Bilingual Decoder Typically, in the reading comprehension task, attention mechanism is used to measure the relations between the passage and question. Moreover, as Transformers are fundamental components of BERT, multi-head self-attention layer BIBREF15 is used to extract useful information within the input sequence. Specifically, in our model, to enhance the target representation, we use a multi-head self-attention layer to extract useful information in source BERT representation $B_{S}$. We aim to generate target span by not only relying on target representation but also on source representation to simultaneously consider the $<$passage, question$>$ relations in both languages, which can be seen as a bilingual decoding process. Briefly, we regard target BERT representation $B_T$ as query and source BERT representation $B_S$ as key and value in multi-head attention mechanism. In original multi-head attention, we calculate a raw dot attention as follows. This will result in an attention matrix $A_{TS}$ that indicate raw relations between each source and target token. To combine the benefit of both inter-attention and self-attention, instead of using Equation 1, we propose a simple modification on multi-head attention mechanism, which is called Self-Adaptive Attention (SAA). First, we calculate self-attention of $B_T$ and $B_S$ and apply the softmax function, as shown in Equation 2 and 3. This is designed to use self-attention to filter the irrelevant part within each representation firstly, and inform the raw dot attention on paying more attention to the self-attended part, making the attention more precise and accurate. Then we use self-attention $A_T$ and $A_S$, inter-attention $A_{TS}$ to get self-attentive attention $\tilde{A}_{TS}$. We calculate dot product between ${A}_{ST}$ and $B_S$ to obtain attended representation $R^{\prime } \in \mathbb {R}^{L_T*h}$. After obtaining attended representation $R^{\prime }$, we use an additional fully connected layer with residual layer normalization which is similar to BERT implementation. Finally, we calculate weighted sum of $H_{T}$ to get final span prediction $P_{T}^{s}, P_{T}^{e}$ (superscript $s$ for start, $e$ for end). For example, the start position $P_{T}^{s}$ is calculated by the following equation. 
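The display equation referred to above did not survive extraction into this text. A plausible reconstruction, consistent with the surrounding description of a weighted sum over the fused target representation $H_T$, is sketched below; the vectors $w^{s}$ and $w^{e}$ are assumed learned parameters and the paper's exact parameterization may differ.

```latex
% Hedged sketch of the span prediction; w^{s}, w^{e} are assumed learned vectors.
P_{T}^{s} = \mathrm{softmax}(H_{T} w^{s}), \qquad
P_{T}^{e} = \mathrm{softmax}(H_{T} w^{e})
% Target span loss as standard cross entropy over the gold start/end positions y^{s}, y^{e}:
\mathcal{L}_{T} = -\tfrac{1}{2}\left(\log P_{T}^{s}[y^{s}] + \log P_{T}^{e}[y^{e}]\right)
```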
We calculate standard cross entropy loss for the start and end predictions in the target language. Dual BERT ::: Auxiliary Output In order to evaluate how translated sample behaves in the source language system, we also generate span prediction for source language using BERT representation $B_S$ directly without further calculation, resulting in the start and target prediction $P_{S}^{s}, P_{S}^{e}$ (similar to Equation 8). Moreover, we also calculate cross-entropy loss $\mathcal {L}_{aux}$ for translated sample (similar to Equation 9), where a $\lambda $ parameter is applied to this loss. Instead of setting $\lambda $ with heuristic value, in this paper, we propose a novel approach to better adjust $\lambda $ automatically. As the sample was generated by the machine translation system, there would be information loss during the translation process. Wrong or partially translated samples may harm the performance of reading comprehension system. To measure how the translated samples assemble the real target samples, we calculate cosine similarity between the ground truth span representation in source and target language (denoted as $\tilde{H}_{S}$ and $\tilde{H}_{T}$). When the ground truth span representation in the translated sample is similar to the real target samples, the $\lambda $ increase; otherwise, we only use target span loss as $\lambda $ may decrease to zero. The span representation is the concatenation of three parts: BERT representation of ground truth start $B^s \in \mathbb {R}^{h} $, ground truth end $B^e \in \mathbb {R}^{h}$, and self-attended span $B^{att} \in \mathbb {R}^{h}$, which considers both boundary information (start/end) and mixed representation of the whole ground truth span. We use BERT representation $B$ to get a self-attended span representation $B^{att}$ using a simple dot product with average pooling, to get a 2D-tensor. The overall loss for Dual BERT is composed by two parts: target span loss $\mathcal {L}_{T}$ and auxiliary span loss in source language $\mathcal {L}_{aux}$. Experiments ::: Experimental Setups We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and DRCD (traditional Chinese) BIBREF9. The statistics of the two datasets are listed in Table TABREF29. Note that, since the test and challenge sets are preserved by CMRC 2018 official to ensure the integrity of the evaluation process, we submitted our best-performing systems to the organizers to get these scores. The resource in source language was chosen as SQuAD BIBREF4 training data. The settings of the proposed approaches are listed below in detail. [leftmargin=*] Tokenization: Following the official BERT implementation, we use WordPiece tokenizer BIBREF16 for English and character-level tokenizer for Chinese. BERT: We use pre-trained English BERT on SQuAD 1.1 BIBREF4 for initialization, denoted as SQ-$B_{en}$ (base) and SQ-$L_{en}$ (large) for back-translation approaches. For other conditions, we use multi-lingual BERT as default, denoted as $B_{mul}$ (and SQ-$B_{mul}$ for those were pre-trained on SQuAD). Translation: We use Google Neural Machine Translation (GNMT) system for translation. We evaluated GNMT system on NIST MT02/03/04/05/06/08 Chinese-English set and achieved an average BLEU score of 43.24, compared to previous best work (43.20) BIBREF17, yielding state-of-the-art performance. 
Optimization: Following original BERT implementation, we use Adam with weight decay optimizer BIBREF18 using an initial learning rate of 4e-5 and use cosine learning rate decay scheme instead of the original linear decay, which we found it beneficial for stabilizing results. The training batch size is set to 64, and each model is trained for 2 epochs, which roughly takes 1 hour. Implementation: We modified the TensorFlow BIBREF19 version run_squad.py provided by BERT. All models are trained on Cloud TPU v2 that has 64GB HBM. Experiments ::: Overall Results The overall results are shown in Table TABREF37. As we can see that, without using any alignment approach, the zero-shot results are quite lower regardless of using English BERT-base (#1) or BERT-large (#2). When we apply SimpleMatch (#3), we observe significant improvements demonstrating its effectiveness. The Answer Aligner (#4) could further improve the performance beyond SimpleMatch approach, demonstrating that the machine learning approach could dynamically adjust the span output by learning the semantic relationship between translated answer and target passage. Also, the Answer Verifier (#5) could further boost performance and surpass the multi-lingual BERT baseline (#7) that only use target training data, demonstrating that it is beneficial to adopt rich-resourced language to improve machine reading comprehension in other languages. When we do not use SQuAD pre-trained weights, the proposed Dual BERT (#8) yields significant improvements (all results are verified by p-test with $p<0.05$) over both Chinese BERT (#6) and multi-lingual BERT (#7) by a large margin. If we only train the BERT with SQuAD (#9), which is a zero-shot system, we can see that it achieves decent performance on two Chinese reading comprehension data. Moreover, we can also pursue further improvements by continue training (#10) with Chinese data starting from the system #9, or mixing Chinese data with SQuAD and training from initial multi-lingual BERT (#11). Under powerful SQuAD pre-trained baselines, Dual BERT (#12) still gives moderate and consistent improvements over Cascade Training (#10) and Mixed Training (#11) baselines and set new state-of-the-art performances on both datasets, demonstrating the effectiveness of using machine-translated sample to enhance the Chinese reading comprehension performance. Experiments ::: Results on Japanese and French SQuAD In this paper, we propose a simple but effective approach called SimpleMatch to align translated answer to original passage span. While one may argue that using neural machine translation attention to project source answer to original target passage span is ideal as used in BIBREF10. However, to extract attention value in neural machine translation system and apply it to extract the original passage span is bothersome and computationally ineffective. To demonstrate the effectiveness of using SimpleMatch instead of using NMT attention to extract original passage span in zero-shot condition, we applied SimpleMatch to Japanese and French SQuAD (304 samples for each) which is what exactly used in BIBREF10. The results are listed in Table TABREF40. From the results, we can see that, though our baseline (GNMT+BERT$_{L_{en}}$) is higher than previous work (Back-Translation BIBREF10), when using SimpleMatch to extract original passage span could obtain competitive of even larger improvements. 
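To make the SimpleMatch procedure evaluated in these zero-shot experiments concrete, below is a minimal Python sketch. It assumes the character-level F1 is the standard bag-of-characters overlap F1 (as in SQuAD-style evaluation); the scoring details of the original implementation may differ.

```python
from collections import Counter

def char_f1(candidate, answer):
    """Character-level F1 between a candidate span and the translated answer."""
    common = Counter(candidate) & Counter(answer)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(candidate)
    recall = overlap / len(answer)
    return 2 * precision * recall / (precision + recall)

def simple_match(passage, translated_answer, delta=5):
    """Return the passage span whose character-level F1 with the translated
    answer is highest, searching window lengths len(answer) +/- delta
    (delta is the relax parameter from the paper)."""
    best_span, best_score = "", 0.0
    target_len = len(translated_answer)
    for length in range(max(1, target_len - delta), target_len + delta + 1):
        for start in range(len(passage) - length + 1):
            span = passage[start:start + length]
            score = char_f1(span, translated_answer)
            if score > best_score:
                best_span, best_score = span, score
    return best_span, best_score
```

The exhaustive scan over window lengths $len(A_{trans}) \pm \delta$ is cheap for typical passages since $\delta \le 5$, and it guarantees that the returned answer is an exact span of the target passage.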
In Japanese SQuAD, the F1 score improved by 9.6 in BIBREF10 using NMT attention, while we obtain larger improvement with 11.8 points demonstrating the effectiveness of the proposed method. BERT with pre-trained SQuAD weights yields the best performance among these systems, as it does not require the machine translation process and has unified text representations for different languages. Experiments ::: Ablation Studies In this section, we ablate important components in our model to explicitly demonstrate its effectiveness. The ablation results are depicted in Table TABREF42. As we can see that, removing SQuAD pre-trained weights (i.e., using randomly initialized BERT) hurts the performance most, suggesting that it is beneficial to use pre-trained weights though the source and the target language is different. Removing source BERT will degenerate to cascade training, and the results show that it also harms overall performance, demonstrating that it is beneficial to utilize translated sample for better characterizing the relations between $<$passage, question, answer$>$. The other modifications seem to also consistently decrease the performance to some extent, but not as salient as the data-related components (last two lines), indicating that data-related approaches are important in cross-lingual machine reading comprehension task. Discussion In our preliminary cross-lingual experiments, we adopt English as our source language data. However, one question remains unclear. Is it better to pre-train with larger data in a distant language (such as English, as oppose to Simplified Chinese), or with smaller data in closer language (such as Traditional Chinese)? To investigate the problem, we plot the multi-lingual BERT performance on the CMRC 2018 development data using different language and data size in the pre-training stage. The results are depicted in Figure FIGREF43, and we come to several observations. Firstly, when the size of pre-training data is under 25k (training data size of DRCD), we can see that there is no much difference whether we use Chinese or English data for pre-training, and even the English pre-trained models are better than Chinese pre-trained models in most of the times, which is not expected. We suspect that, by using multi-lingual BERT, the model tend to provide universal representations for the text and learn the language-independent semantic relations among the inputs which is ideal for cross-lingual tasks, thus the model is not that sensitive to the language in the pre-training stage. Also, as training data size of SQuAD is larger than DRCD, we could use more data for pre-training. When we add more SQuAD data ($>$25k) in the pre-training stage, the performance on the downstream task (CMRC 2018) continues to improve significantly. In this context, we conclude that, When the pre-training data is not abundant, there is no special preference on the selection of source (pre-training) language. If there are large-scale training data available for several languages, we should select the source language as the one that has the largest training data rather than its linguistic similarity to the target language. Furthermore, one could also take advantages of data in various languages, but not only in a bilingual environment, to further exploit knowledge from various sources, which is beyond the scope of this paper and we leave this for future work. Conclusion In this paper, we propose Cross-Lingual Machine Reading Comprehension (CLMRC) task. 
When there is no training data available for the target language, we first provide several zero-shot approaches that are initially trained on English and transferred to other languages, along with three methods to improve the translated answer span using unsupervised and supervised approaches. When there is training data available for the target language, we propose a novel model called Dual BERT to simultaneously model the $<$passage, question, answer$>$ in source and target languages using multi-lingual BERT. The proposed method takes advantage of the large-scale training data available in a rich-resource language (such as SQuAD) and learns the semantic relations between the passage and question in both the source and target languages. Experiments on two Chinese machine reading comprehension datasets indicate that the proposed model gives consistent and significant improvements over various state-of-the-art systems by a large margin and sets baselines for future research on the CLMRC task. Future studies on cross-lingual machine reading comprehension will focus on 1) how to utilize various types of English reading comprehension data; 2) cross-lingual machine reading comprehension without the translation process, etc. Acknowledgments We would like to thank all anonymous reviewers for their thorough reviews and constructive comments that helped improve our paper. The first author was partially supported by the Google TensorFlow Research Cloud (TFRC) program for Cloud TPU access. This work was supported by the National Natural Science Foundation of China (NSFC) via grants 61976072, 61632011, and 61772153.
Unanswerable
6f2118a0c64d5d2f49eee004d35b956cb330a10e
6f2118a0c64d5d2f49eee004d35b956cb330a10e_0
Q: What datasets are used for training/testing models? Text: Introduction In a task-oriented dialogue system, the dialogue manager policy module predicts actions usually in terms of dialogue acts and domain specific slots. It is a crucial component that influences the efficiency (e.g., the conciseness and smoothness) of the communication between the user and the agent. Both supervised learning (SL) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 and reinforcement learning (RL) approaches BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9 have been adopted to learn policies. SL learns a policy to predict acts given the dialogue state. Recent work BIBREF10, BIBREF11 also used SL as pre-training for RL to mitigate the sample inefficiency of RL approaches and to reduce the number of interactions. Sequence2Sequence (Seq2Seq) BIBREF12 approaches have also been adopted in user simulators to produce user acts BIBREF13. These approaches typically assume that the agent can only produce one act per turn through classification. Generating only one act per turn significantly limits what an agent can do in a turn and leads to lengthy dialogues, making tracking of state and context throughout the dialogue harder. An example in Table TABREF3 shows how the agent can produce both an inform and a multiple_choice act, reducing the need for additional turns. The use of multiple actions has previously been used in interaction managers that keep track of the floor (who is speaking right now) BIBREF14, BIBREF15, BIBREF16, but the option of generating multiple acts simultaneously at each turn for dialogue policy has been largely ignored, and only explored in simulated scenarios without real data BIBREF17. This task can be cast as a multi-label classification problem (if the sequential dependency among the acts is ignored) or as a sequence generation one as shown in Table TABREF4. In this paper, we introduce a novel policy model to output multiple actions per turn (called multi-act), generating a sequence of tuples and expanding agents' expressive power. Each tuple is defined as $(\textit {continue}, \textit {act}, \textit {slots})$, where continue indicates whether to continue or stop producing new acts, act is an act type (e.g., inform or request), and slots is a set of slots (names) associated with the current act type. Correspondingly, a novel decoder (Figure FIGREF5) is proposed to produce such sequences. Each tuple is generated by a cell called gated Continue Act Slots (gCAS, as in Figure FIGREF7), which is composed of three sequentially connected gated units handling the three components of the tuple. This decoder can generate multi-acts in a double recurrent manner BIBREF18. We compare this model with baseline classifiers and sequence generation models and show that it consistently outperforms them. Methodology The proposed policy network adopts an encoder-decoder architecture (Figure FIGREF5). The input to the encoder is the current-turn dialogue state, which follows BIBREF19's definition. It contains policy actions from the previous turn, user dialogue acts from the current turn, user requested slots, the user informed slots, the agent requested slots and agent proposed slots. We treat the dialogue state as a sequence and adopt a GRU BIBREF20 to encode it. The encoded dialogue state is a sequence of vectors $\mathbf {E} = (e_0, \ldots , e_l)$ and the last hidden state is $h^{E}$. The CAS decoder recurrently generates tuples at each step. It takes $h^{E}$ as initial hidden state $h_0$. 
At each decoding step, the input contains the previous (continue, act, slots) tuple $(c_{t-1},a_{t-1},s_{t-1})$. An additional vector $k$ containing the number of results from the knowledge base (KB) query and the current turn number is given as input. The output of the decoder at each step is a tuple $(c, a, s)$, where $c \in \lbrace \langle \text{continue} \rangle , \langle \text{stop} \rangle , \langle \text{pad} \rangle \rbrace $, $a \in A$ (one act from the act set), and $s \subset S$ (a subset from the slot set). Methodology ::: gCAS Cell As shown in Figure FIGREF7, the gated CAS cell contains three sequentially connected units for outputting continue, act, and slots respectively. The Continue unit maps the previous tuple $(c_{t-1}, a_{t-1}, s_{t-1})$ and the KB vector $k$ into $x_t^c$. The hidden state from the previous step $h_{t-1}$ and $x_t^c$ are inputs to a $\text{GRU}^c$ unit that produces output $g_t^c$ and hidden state $h_t^c$. Finally, $g_t^c$ is used to predict $c_t$ through a linear projection and a $\text{softmax}$. The Act unit maps the tuple $(c_t, a_{t-1}, s_{t-1})$ and the KB vector $k$ into $x_t^a$. The hidden state from the continue cell $h_t^c$ and $x_t^a$ are inputs to a $\text{GRU}^a$ unit that produces output $g_t^a$ and hidden state $h_t^a$. Finally, $g_t^a$ is used to predict $a_t$ through a linear projection and a $\text{softmax}$. The Slots unit maps the tuple $(c_t, a_t, s_{t-1})$ and the KB vector $k$ into $x_t^s$. The hidden state from the act cell $h_t^a$ and $x_t^s$ are inputs to a $\text{GRU}^s$ unit that produces output $g_t^s$ and hidden state $h_t^s$. Finally, $g_t^a$ is used to predict $s_t$ through a linear projection and a $\text{sigmoid}$. Let $z_t^i$ be the $i$-th slot's ground truth. The overall loss is the sum of the losses of the three units: $\mathcal {L} = \mathcal {L}^c + \mathcal {L}^a + \mathcal {L}^s$ Experiments The experiment dataset comes from Microsoft Research (MSR) . It contains three domains: movie, taxi, and restaurant. The total count of dialogues per domain and train/valid/test split is reported in Table TABREF11. At every turn both user and agent acts are annotated, we use only the agent side as targets in our experiment. The acts are ordered in the dataset (each output sentence aligns with one act). The size of the sets of acts, slots, and act-slot pairs are also listed in Table TABREF11. Table TABREF12 shows the count of turns with multiple act annotations, which amounts to 23% of the dataset. We use MSR's dialogue management code and knowledge base to obtain the state at each turn and use it as input to every model. Experiments ::: Evaluation Metrics We evaluate the performance at the act, frame and task completion level. For a frame to be correct, both the act and all the slots should match the ground truth. We report precision, recall, F$_1$ score of turn-level acts and frames. For task completion evaluation, Entity F$_1$ score and Success F$_1$ score BIBREF21 are reported. The Entity F$_1$ score, differently from the entity match rate in state tracking, compares the slots requested by the agent with the slots the user informed about and that were used to perform the KB query. We use it to measure agent performance in requesting information. The Success F$_1$ score compares the slots provided by the agent with the slots requested by the user. We use it to measure the agent performance in providing information. 
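As a reference for how such slot-level F1 scores can be computed, here is a minimal sketch that compares two collections of slots. The exact definitions of Entity F1 and Success F1 follow BIBREF21, so treat this as an illustrative approximation rather than the official scorer.

```python
from collections import Counter

def slot_f1(predicted, reference):
    """F1 between a predicted and a reference collection of slots
    (e.g., agent-requested vs. user-informed slots for Entity F1)."""
    pred, ref = Counter(predicted), Counter(reference)
    overlap = sum((pred & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Example: agent requested {date, time}; user informed {date, time, city}.
print(slot_f1(["date", "time"], ["date", "time", "city"]))  # 0.8
```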
Critical slots and Non-critical slots: By `non-critical', we mean slots that the user informs the system about by providing their values and thus it is not critical for the system to provide them in the output. Table 1 shows an example, with the genre slot provided by the user and the system repeating it in its answer. Critical slots refers to slots that the system must provide like “moviename” in the Table 1 example. Although non-critical slots do not impact task completion directly, they may influence the output quality by enriching the dialogue state and helping users understand the system's utterance correctly. Furthermore, given the same dialog state, utterances offering non-critical slots or not offering them can both be present in the dataset, as they are optional. This makes the prediction of those slots more challenging for the system. To provide a more detailed analysis, we report the precision, recall, F$_1$ score of turn-level for all slots, critical slots and non-critical slots of the inform act. Experiments ::: Baseline We compare five methods on the multi-act task. Classification replicates the MSR challenge BIBREF19 policy network architecture: two fully connected layers. We replace the last activation from $\text{softmax}$ to $\text{sigmoid}$ in order to predict probabilities for each act-slot pair. It is equivalent to binary classification for each act-slot pair and the loss is the sum of the binary cross-entropy of all of them. Seq2Seq BIBREF12 encodes the dialogue state as a sequence, and decodes agent acts as a sequence with attention BIBREF22. Copy Seq2Seq BIBREF23 adds a copy mechanism to Seq2Seq, which allows copying words from the encoder input. CAS adopts a single GRU BIBREF20 for decoding and uses three different fully connected layers for mapping the output of the GRU to continue, act and slots. For each step in the sequence of CAS tuples, given the output of the GRU, continue, act and slot predictions are obtained by separate heads, each with one fully connected layer. The hidden state of the GRU and the predictions at the previous step are passed to the cell at the next step connecting them sequentially. gCAS uses our proposed recurrent cell which contains separate continue, act and slots unit that are sequentially connected. The classification architecture has two fully connected layers of size 128, and the remaining models have a hidden size of 64 and a teacher-forcing rate of 0.5. Seq2Seq and Copy Seq2Seq use a beam search with beam size 10 during inference. CAS and gCAS do not adopt a beam search since their inference steps are much less than Seq2Seq methods. All models use Adam optimizer BIBREF24 with a learning rate of 0.001. Experiments ::: Result and Error Analysis As shown in Table TABREF13, gCAS outperforms all other methods on Entity F$_1$ in all three domains. Compared to Seq2Seq, the performance advantage of gCAS in the taxi and restaurant domains is small, while it is more evident in the movie domain. The reason is that in the movie domain the proportion of turns with multiple acts is higher (52%), while in the other two domains it is lower (30%). gCAS also outperforms all other models in terms of Success F$_1$ in the movie and restaurant domain but is outperformed by the classification model in the taxi domain. The reason is that in the taxi domain, the agent usually informs the user at the last turn, while in all previous turns the agent usually requests information from the user. It is easy for the classification model to overfit this pattern. 
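To make the gCAS decoder cell described in the Methodology section more concrete, here is a minimal PyTorch-style sketch. The way the previous tuple and KB vector are embedded and projected into $x_t^c$, $x_t^a$, $x_t^s$ is our own assumption (the paper only states that such a mapping exists), and greedy argmax decoding is shown; during training, teacher forcing with the ground-truth tuple would typically be used instead.

```python
import torch
import torch.nn as nn

class GatedCASCell(nn.Module):
    """Sketch of the gated CAS cell: three sequentially connected GRU units
    for continue, act and slots. Embedding sizes and input projections are
    assumptions, not the paper's exact choices."""
    def __init__(self, n_acts, n_slots, kb_dim, hidden=64, emb=32):
        super().__init__()
        self.c_emb = nn.Embedding(3, emb)        # <continue>, <stop>, <pad>
        self.a_emb = nn.Embedding(n_acts, emb)
        in_dim = emb + emb + n_slots + kb_dim    # continue + act + slots + KB vector
        self.proj_c = nn.Linear(in_dim, hidden)
        self.proj_a = nn.Linear(in_dim, hidden)
        self.proj_s = nn.Linear(in_dim, hidden)
        self.gru_c = nn.GRUCell(hidden, hidden)
        self.gru_a = nn.GRUCell(hidden, hidden)
        self.gru_s = nn.GRUCell(hidden, hidden)
        self.out_c = nn.Linear(hidden, 3)        # softmax / cross-entropy applied outside
        self.out_a = nn.Linear(hidden, n_acts)   # softmax / cross-entropy applied outside
        self.out_s = nn.Linear(hidden, n_slots)  # sigmoid for multi-label slots

    def forward(self, c_prev, a_prev, s_prev, k, h_prev):
        # Continue unit: consumes the full previous tuple and the KB vector.
        x_c = self.proj_c(torch.cat([self.c_emb(c_prev), self.a_emb(a_prev), s_prev, k], dim=-1))
        h_c = self.gru_c(x_c, h_prev)
        c_logits = self.out_c(h_c)
        c_t = c_logits.argmax(-1)
        # Act unit: conditioned on the freshly predicted continue symbol.
        x_a = self.proj_a(torch.cat([self.c_emb(c_t), self.a_emb(a_prev), s_prev, k], dim=-1))
        h_a = self.gru_a(x_a, h_c)
        a_logits = self.out_a(h_a)
        a_t = a_logits.argmax(-1)
        # Slots unit: conditioned on the predicted continue and act.
        x_s = self.proj_s(torch.cat([self.c_emb(c_t), self.a_emb(a_t), s_prev, k], dim=-1))
        h_s = self.gru_s(x_s, h_a)
        s_probs = torch.sigmoid(self.out_s(h_s))
        return c_logits, a_logits, s_probs, h_s
```

The design point mirrored here is the sequential gating: the act unit sees the freshly predicted continue symbol, and the slots unit sees both the predicted continue and act, so each component of the tuple conditions the next within a single decoding step, and the final hidden state $h_t^s$ is carried to the next step.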
The advantage of gCAS in the restaurant domain is much more evident: the agent's inform act usually has multiple slots (see example 2 in Table TABREF15) and this makes classification and sequence generation harder, but gCAS multi-label slots decoder handles it easily. Table TABREF14 shows the turn-level acts and frame prediction performance. CAS and gCAS outperform all other models in acts prediction in terms of F$_1$ score. The main reason is that CAS and gCAS output a tuple at each recurrent step, which makes for shorter sequences that are easier to generate compared to the long sequences of Seq2Seq (example 2 in Table TABREF15). The classification method has a good precision score, but a lower recall score, suggesting it has problems making granular decisions (example 2 in Table TABREF15). At the frame level, gCAS still outperforms all other methods. The performance difference between CAS and gCAS on frames becomes much more evident, suggesting that gCAS is more capable of predicting slots that are consistent with the act. This finding is also consistent with their Entity F$_1$ and Success F$_1$ performance. However, gCAS's act-slot pair performance is far from perfect. The most common failure case is on non-critical slots (like `genre' in the example in Table TABREF4): gCAS does not predict them, while it predicts the critical ones (like `moviename' in the example in Table TABREF4). Table TABREF15 shows predictions of all methods from two emblematic examples. Example 1 is a frequent single-act multi-slots agent act. Example 2 is a complex multi-act example. The baseline classification method can predict frequent pairs in the dataset, but cannot predict any act in the complex example. The generated sequences of Copy Seq2Seq and Seq2Seq show that both models struggle in following the syntax. CAS cannot predict slots correctly even if the act is common in the dataset. gCAS returns a correct prediction for Example 1, but for Example 2 gCAS cannot predict `starttime', which is a non-critical slot. Tables TABREF16 and TABREF17 show the results of all slots, critical slots and non-critical slots under the inform act. gCAS performs better than the other methods on all slots in the movie and restaurant domains. The reason why classification performs the best here in the taxi domain is the same as the Success F$_1$. In the taxi domain, the agent usually informs the user at the last turn. The non-critical slots are also repeated frequently in the taxi domain, which makes their prediction easier. gCAS's performance is close to other methods on critical-slots. The reason is that the inform act is mostly the first act in multi-act and critical slots are usually frequent in the data. All methods can predict them well. In the movie and restaurant domains, the inform act usually appears during the dialogue and there are many optional non-critical slots that can appear (see Table TABREF11, movie and restaurant domains have more slots and pairs than the taxi domain). gCAS can better predict the non-critical slots than other methods. However, the overall performance on non-critical slots is much worse than critical slots since their appearances are optional and inconsistent in the data. Conclusion and Future Work In this paper, we introduced a multi-act dialogue policy model motivated by the need for a richer interaction between users and conversation agents. 
We studied classification and sequence generation methods for this task, and proposed a novel recurrent cell, gated CAS, which allows the decoder to output a tuple at each step. Experimental results showed that gCAS is the best performing model for multi-act prediction. The CAS decoder and the gCAS cell can also be used in a user simulator and gCAS can be applied in the encoder. A few directions for improvement have also been identified: 1) improving the performance on non-critical slots, 2) tuning the decoder with RL, 3) text generation from gCAS. We leave them as future work. Acknowledgments We would like to express our special thanks to Alexandros Papangelis and Gokhan Tur for their support and contribution. We also would like to thank Xiujun Li for his help on dataset preparation and Jane Hung for her valuable comments. Bing Liu is partially supported by the NSF grant IIS-1910424 and a research gift from Northrop Grumman.
Microsoft Research dataset containing movie, taxi and restaurant domains.
8a0a51382d186e8d92bf7e78277a1d48958758da
8a0a51382d186e8d92bf7e78277a1d48958758da_0
Q: How better is gCAS approach compared to other approaches? Text: Introduction In a task-oriented dialogue system, the dialogue manager policy module predicts actions usually in terms of dialogue acts and domain specific slots. It is a crucial component that influences the efficiency (e.g., the conciseness and smoothness) of the communication between the user and the agent. Both supervised learning (SL) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 and reinforcement learning (RL) approaches BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9 have been adopted to learn policies. SL learns a policy to predict acts given the dialogue state. Recent work BIBREF10, BIBREF11 also used SL as pre-training for RL to mitigate the sample inefficiency of RL approaches and to reduce the number of interactions. Sequence2Sequence (Seq2Seq) BIBREF12 approaches have also been adopted in user simulators to produce user acts BIBREF13. These approaches typically assume that the agent can only produce one act per turn through classification. Generating only one act per turn significantly limits what an agent can do in a turn and leads to lengthy dialogues, making tracking of state and context throughout the dialogue harder. An example in Table TABREF3 shows how the agent can produce both an inform and a multiple_choice act, reducing the need for additional turns. The use of multiple actions has previously been used in interaction managers that keep track of the floor (who is speaking right now) BIBREF14, BIBREF15, BIBREF16, but the option of generating multiple acts simultaneously at each turn for dialogue policy has been largely ignored, and only explored in simulated scenarios without real data BIBREF17. This task can be cast as a multi-label classification problem (if the sequential dependency among the acts is ignored) or as a sequence generation one as shown in Table TABREF4. In this paper, we introduce a novel policy model to output multiple actions per turn (called multi-act), generating a sequence of tuples and expanding agents' expressive power. Each tuple is defined as $(\textit {continue}, \textit {act}, \textit {slots})$, where continue indicates whether to continue or stop producing new acts, act is an act type (e.g., inform or request), and slots is a set of slots (names) associated with the current act type. Correspondingly, a novel decoder (Figure FIGREF5) is proposed to produce such sequences. Each tuple is generated by a cell called gated Continue Act Slots (gCAS, as in Figure FIGREF7), which is composed of three sequentially connected gated units handling the three components of the tuple. This decoder can generate multi-acts in a double recurrent manner BIBREF18. We compare this model with baseline classifiers and sequence generation models and show that it consistently outperforms them. Methodology The proposed policy network adopts an encoder-decoder architecture (Figure FIGREF5). The input to the encoder is the current-turn dialogue state, which follows BIBREF19's definition. It contains policy actions from the previous turn, user dialogue acts from the current turn, user requested slots, the user informed slots, the agent requested slots and agent proposed slots. We treat the dialogue state as a sequence and adopt a GRU BIBREF20 to encode it. The encoded dialogue state is a sequence of vectors $\mathbf {E} = (e_0, \ldots , e_l)$ and the last hidden state is $h^{E}$. The CAS decoder recurrently generates tuples at each step. It takes $h^{E}$ as initial hidden state $h_0$. 
At each decoding step, the input contains the previous (continue, act, slots) tuple $(c_{t-1},a_{t-1},s_{t-1})$. An additional vector $k$ containing the number of results from the knowledge base (KB) query and the current turn number is given as input. The output of the decoder at each step is a tuple $(c, a, s)$, where $c \in \lbrace \langle \text{continue} \rangle , \langle \text{stop} \rangle , \langle \text{pad} \rangle \rbrace $, $a \in A$ (one act from the act set), and $s \subset S$ (a subset from the slot set). Methodology ::: gCAS Cell As shown in Figure FIGREF7, the gated CAS cell contains three sequentially connected units for outputting continue, act, and slots respectively. The Continue unit maps the previous tuple $(c_{t-1}, a_{t-1}, s_{t-1})$ and the KB vector $k$ into $x_t^c$. The hidden state from the previous step $h_{t-1}$ and $x_t^c$ are inputs to a $\text{GRU}^c$ unit that produces output $g_t^c$ and hidden state $h_t^c$. Finally, $g_t^c$ is used to predict $c_t$ through a linear projection and a $\text{softmax}$. The Act unit maps the tuple $(c_t, a_{t-1}, s_{t-1})$ and the KB vector $k$ into $x_t^a$. The hidden state from the continue cell $h_t^c$ and $x_t^a$ are inputs to a $\text{GRU}^a$ unit that produces output $g_t^a$ and hidden state $h_t^a$. Finally, $g_t^a$ is used to predict $a_t$ through a linear projection and a $\text{softmax}$. The Slots unit maps the tuple $(c_t, a_t, s_{t-1})$ and the KB vector $k$ into $x_t^s$. The hidden state from the act cell $h_t^a$ and $x_t^s$ are inputs to a $\text{GRU}^s$ unit that produces output $g_t^s$ and hidden state $h_t^s$. Finally, $g_t^a$ is used to predict $s_t$ through a linear projection and a $\text{sigmoid}$. Let $z_t^i$ be the $i$-th slot's ground truth. The overall loss is the sum of the losses of the three units: $\mathcal {L} = \mathcal {L}^c + \mathcal {L}^a + \mathcal {L}^s$ Experiments The experiment dataset comes from Microsoft Research (MSR) . It contains three domains: movie, taxi, and restaurant. The total count of dialogues per domain and train/valid/test split is reported in Table TABREF11. At every turn both user and agent acts are annotated, we use only the agent side as targets in our experiment. The acts are ordered in the dataset (each output sentence aligns with one act). The size of the sets of acts, slots, and act-slot pairs are also listed in Table TABREF11. Table TABREF12 shows the count of turns with multiple act annotations, which amounts to 23% of the dataset. We use MSR's dialogue management code and knowledge base to obtain the state at each turn and use it as input to every model. Experiments ::: Evaluation Metrics We evaluate the performance at the act, frame and task completion level. For a frame to be correct, both the act and all the slots should match the ground truth. We report precision, recall, F$_1$ score of turn-level acts and frames. For task completion evaluation, Entity F$_1$ score and Success F$_1$ score BIBREF21 are reported. The Entity F$_1$ score, differently from the entity match rate in state tracking, compares the slots requested by the agent with the slots the user informed about and that were used to perform the KB query. We use it to measure agent performance in requesting information. The Success F$_1$ score compares the slots provided by the agent with the slots requested by the user. We use it to measure the agent performance in providing information. 
Critical slots and Non-critical slots: By `non-critical', we mean slots that the user informs the system about by providing their values and thus it is not critical for the system to provide them in the output. Table 1 shows an example, with the genre slot provided by the user and the system repeating it in its answer. Critical slots refers to slots that the system must provide like “moviename” in the Table 1 example. Although non-critical slots do not impact task completion directly, they may influence the output quality by enriching the dialogue state and helping users understand the system's utterance correctly. Furthermore, given the same dialog state, utterances offering non-critical slots or not offering them can both be present in the dataset, as they are optional. This makes the prediction of those slots more challenging for the system. To provide a more detailed analysis, we report the precision, recall, F$_1$ score of turn-level for all slots, critical slots and non-critical slots of the inform act. Experiments ::: Baseline We compare five methods on the multi-act task. Classification replicates the MSR challenge BIBREF19 policy network architecture: two fully connected layers. We replace the last activation from $\text{softmax}$ to $\text{sigmoid}$ in order to predict probabilities for each act-slot pair. It is equivalent to binary classification for each act-slot pair and the loss is the sum of the binary cross-entropy of all of them. Seq2Seq BIBREF12 encodes the dialogue state as a sequence, and decodes agent acts as a sequence with attention BIBREF22. Copy Seq2Seq BIBREF23 adds a copy mechanism to Seq2Seq, which allows copying words from the encoder input. CAS adopts a single GRU BIBREF20 for decoding and uses three different fully connected layers for mapping the output of the GRU to continue, act and slots. For each step in the sequence of CAS tuples, given the output of the GRU, continue, act and slot predictions are obtained by separate heads, each with one fully connected layer. The hidden state of the GRU and the predictions at the previous step are passed to the cell at the next step connecting them sequentially. gCAS uses our proposed recurrent cell which contains separate continue, act and slots unit that are sequentially connected. The classification architecture has two fully connected layers of size 128, and the remaining models have a hidden size of 64 and a teacher-forcing rate of 0.5. Seq2Seq and Copy Seq2Seq use a beam search with beam size 10 during inference. CAS and gCAS do not adopt a beam search since their inference steps are much less than Seq2Seq methods. All models use Adam optimizer BIBREF24 with a learning rate of 0.001. Experiments ::: Result and Error Analysis As shown in Table TABREF13, gCAS outperforms all other methods on Entity F$_1$ in all three domains. Compared to Seq2Seq, the performance advantage of gCAS in the taxi and restaurant domains is small, while it is more evident in the movie domain. The reason is that in the movie domain the proportion of turns with multiple acts is higher (52%), while in the other two domains it is lower (30%). gCAS also outperforms all other models in terms of Success F$_1$ in the movie and restaurant domain but is outperformed by the classification model in the taxi domain. The reason is that in the taxi domain, the agent usually informs the user at the last turn, while in all previous turns the agent usually requests information from the user. It is easy for the classification model to overfit this pattern. 
The advantage of gCAS in the restaurant domain is much more evident: the agent's inform act usually has multiple slots (see example 2 in Table TABREF15) and this makes classification and sequence generation harder, but gCAS multi-label slots decoder handles it easily. Table TABREF14 shows the turn-level acts and frame prediction performance. CAS and gCAS outperform all other models in acts prediction in terms of F$_1$ score. The main reason is that CAS and gCAS output a tuple at each recurrent step, which makes for shorter sequences that are easier to generate compared to the long sequences of Seq2Seq (example 2 in Table TABREF15). The classification method has a good precision score, but a lower recall score, suggesting it has problems making granular decisions (example 2 in Table TABREF15). At the frame level, gCAS still outperforms all other methods. The performance difference between CAS and gCAS on frames becomes much more evident, suggesting that gCAS is more capable of predicting slots that are consistent with the act. This finding is also consistent with their Entity F$_1$ and Success F$_1$ performance. However, gCAS's act-slot pair performance is far from perfect. The most common failure case is on non-critical slots (like `genre' in the example in Table TABREF4): gCAS does not predict them, while it predicts the critical ones (like `moviename' in the example in Table TABREF4). Table TABREF15 shows predictions of all methods from two emblematic examples. Example 1 is a frequent single-act multi-slots agent act. Example 2 is a complex multi-act example. The baseline classification method can predict frequent pairs in the dataset, but cannot predict any act in the complex example. The generated sequences of Copy Seq2Seq and Seq2Seq show that both models struggle in following the syntax. CAS cannot predict slots correctly even if the act is common in the dataset. gCAS returns a correct prediction for Example 1, but for Example 2 gCAS cannot predict `starttime', which is a non-critical slot. Tables TABREF16 and TABREF17 show the results of all slots, critical slots and non-critical slots under the inform act. gCAS performs better than the other methods on all slots in the movie and restaurant domains. The reason why classification performs the best here in the taxi domain is the same as the Success F$_1$. In the taxi domain, the agent usually informs the user at the last turn. The non-critical slots are also repeated frequently in the taxi domain, which makes their prediction easier. gCAS's performance is close to other methods on critical-slots. The reason is that the inform act is mostly the first act in multi-act and critical slots are usually frequent in the data. All methods can predict them well. In the movie and restaurant domains, the inform act usually appears during the dialogue and there are many optional non-critical slots that can appear (see Table TABREF11, movie and restaurant domains have more slots and pairs than the taxi domain). gCAS can better predict the non-critical slots than other methods. However, the overall performance on non-critical slots is much worse than critical slots since their appearances are optional and inconsistent in the data. Conclusion and Future Work In this paper, we introduced a multi-act dialogue policy model motivated by the need for a richer interaction between users and conversation agents. 
We studied classification and sequence generation methods for this task, and proposed a novel recurrent cell, gated CAS, which allows the decoder to output a tuple at each step. Experimental results showed that gCAS is the best performing model for multi-act prediction. The CAS decoder and the gCAS cell can also be used in a user simulator and gCAS can be applied in the encoder. A few directions for improvement have also been identified: 1) improving the performance on non-critical slots, 2) tuning the decoder with RL, 3) text generation from gCAS. We leave them as future work. Acknowledgments We would like to express our special thanks to Alexandros Papangelis and Gokhan Tur for their support and contribution. We also would like to thank Xiujun Li for his help on dataset preparation and Jane Hung for her valuable comments. Bing Liu is partially supported by the NSF grant IIS-1910424 and a research gift from Northrop Grumman.
For Entity F1 in the movie, taxi, and restaurant domains it achieves scores of 50.86, 64, and 60.35. For Success F1, it outperforms the other methods in the movie and restaurant domains with scores of 77.95 and 71.52.
b8dea4a98b4da4ef1b9c98a211210e31d6630cf3
b8dea4a98b4da4ef1b9c98a211210e31d6630cf3_0
Q: What is specific to gCAS cell? Text: Introduction In a task-oriented dialogue system, the dialogue manager policy module predicts actions usually in terms of dialogue acts and domain specific slots. It is a crucial component that influences the efficiency (e.g., the conciseness and smoothness) of the communication between the user and the agent. Both supervised learning (SL) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 and reinforcement learning (RL) approaches BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9 have been adopted to learn policies. SL learns a policy to predict acts given the dialogue state. Recent work BIBREF10, BIBREF11 also used SL as pre-training for RL to mitigate the sample inefficiency of RL approaches and to reduce the number of interactions. Sequence2Sequence (Seq2Seq) BIBREF12 approaches have also been adopted in user simulators to produce user acts BIBREF13. These approaches typically assume that the agent can only produce one act per turn through classification. Generating only one act per turn significantly limits what an agent can do in a turn and leads to lengthy dialogues, making tracking of state and context throughout the dialogue harder. An example in Table TABREF3 shows how the agent can produce both an inform and a multiple_choice act, reducing the need for additional turns. The use of multiple actions has previously been used in interaction managers that keep track of the floor (who is speaking right now) BIBREF14, BIBREF15, BIBREF16, but the option of generating multiple acts simultaneously at each turn for dialogue policy has been largely ignored, and only explored in simulated scenarios without real data BIBREF17. This task can be cast as a multi-label classification problem (if the sequential dependency among the acts is ignored) or as a sequence generation one as shown in Table TABREF4. In this paper, we introduce a novel policy model to output multiple actions per turn (called multi-act), generating a sequence of tuples and expanding agents' expressive power. Each tuple is defined as $(\textit {continue}, \textit {act}, \textit {slots})$, where continue indicates whether to continue or stop producing new acts, act is an act type (e.g., inform or request), and slots is a set of slots (names) associated with the current act type. Correspondingly, a novel decoder (Figure FIGREF5) is proposed to produce such sequences. Each tuple is generated by a cell called gated Continue Act Slots (gCAS, as in Figure FIGREF7), which is composed of three sequentially connected gated units handling the three components of the tuple. This decoder can generate multi-acts in a double recurrent manner BIBREF18. We compare this model with baseline classifiers and sequence generation models and show that it consistently outperforms them. Methodology The proposed policy network adopts an encoder-decoder architecture (Figure FIGREF5). The input to the encoder is the current-turn dialogue state, which follows BIBREF19's definition. It contains policy actions from the previous turn, user dialogue acts from the current turn, user requested slots, the user informed slots, the agent requested slots and agent proposed slots. We treat the dialogue state as a sequence and adopt a GRU BIBREF20 to encode it. The encoded dialogue state is a sequence of vectors $\mathbf {E} = (e_0, \ldots , e_l)$ and the last hidden state is $h^{E}$. The CAS decoder recurrently generates tuples at each step. It takes $h^{E}$ as initial hidden state $h_0$. 
At each decoding step, the input contains the previous (continue, act, slots) tuple $(c_{t-1},a_{t-1},s_{t-1})$. An additional vector $k$ containing the number of results from the knowledge base (KB) query and the current turn number is given as input. The output of the decoder at each step is a tuple $(c, a, s)$, where $c \in \lbrace \langle \text{continue} \rangle , \langle \text{stop} \rangle , \langle \text{pad} \rangle \rbrace $, $a \in A$ (one act from the act set), and $s \subset S$ (a subset of the slot set). Methodology ::: gCAS Cell As shown in Figure FIGREF7, the gated CAS cell contains three sequentially connected units for outputting continue, act, and slots, respectively. The Continue unit maps the previous tuple $(c_{t-1}, a_{t-1}, s_{t-1})$ and the KB vector $k$ into $x_t^c$. The hidden state from the previous step $h_{t-1}$ and $x_t^c$ are inputs to a $\text{GRU}^c$ unit that produces output $g_t^c$ and hidden state $h_t^c$. Finally, $g_t^c$ is used to predict $c_t$ through a linear projection and a $\text{softmax}$. The Act unit maps the tuple $(c_t, a_{t-1}, s_{t-1})$ and the KB vector $k$ into $x_t^a$. The hidden state from the continue unit $h_t^c$ and $x_t^a$ are inputs to a $\text{GRU}^a$ unit that produces output $g_t^a$ and hidden state $h_t^a$. Finally, $g_t^a$ is used to predict $a_t$ through a linear projection and a $\text{softmax}$. The Slots unit maps the tuple $(c_t, a_t, s_{t-1})$ and the KB vector $k$ into $x_t^s$. The hidden state from the act unit $h_t^a$ and $x_t^s$ are inputs to a $\text{GRU}^s$ unit that produces output $g_t^s$ and hidden state $h_t^s$. Finally, $g_t^s$ is used to predict $s_t$ through a linear projection and a $\text{sigmoid}$. Let $z_t^i$ be the $i$-th slot's ground truth. The overall loss is the sum of the losses of the three units: $\mathcal {L} = \mathcal {L}^c + \mathcal {L}^a + \mathcal {L}^s$. Experiments The experimental dataset comes from Microsoft Research (MSR). It contains three domains: movie, taxi, and restaurant. The total count of dialogues per domain and the train/valid/test split are reported in Table TABREF11. At every turn both user and agent acts are annotated; we use only the agent side as targets in our experiment. The acts are ordered in the dataset (each output sentence aligns with one act). The sizes of the sets of acts, slots, and act-slot pairs are also listed in Table TABREF11. Table TABREF12 shows the count of turns with multiple act annotations, which amounts to 23% of the dataset. We use MSR's dialogue management code and knowledge base to obtain the state at each turn and use it as input to every model. Experiments ::: Evaluation Metrics We evaluate the performance at the act, frame, and task completion levels. For a frame to be correct, both the act and all the slots should match the ground truth. We report the precision, recall, and F$_1$ score of turn-level acts and frames. For task completion evaluation, the Entity F$_1$ score and the Success F$_1$ score BIBREF21 are reported. The Entity F$_1$ score, unlike the entity match rate in state tracking, compares the slots requested by the agent with the slots the user informed about and that were used to perform the KB query. We use it to measure the agent's performance in requesting information. The Success F$_1$ score compares the slots provided by the agent with the slots requested by the user. We use it to measure the agent's performance in providing information.
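To make the gated CAS cell described in the Methodology above concrete, here is a minimal sketch in PyTorch. It reproduces the three chained units (Continue, Act, Slots) and their heads, but all class names, input encodings, and dimensions are illustrative assumptions rather than the authors' implementation; in particular, the predicted continue and act distributions are simply fed forward as soft inputs to the next unit.

```python
import torch
import torch.nn as nn

class GCASCell(nn.Module):
    """Sketch of the gated Continue-Act-Slots cell: three chained GRU units."""
    def __init__(self, act_size, slot_size, kb_size, hidden_size):
        super().__init__()
        cont_size = 3  # <continue>, <stop>, <pad>
        in_size = cont_size + act_size + slot_size + kb_size
        self.gru_c = nn.GRUCell(in_size, hidden_size)
        self.gru_a = nn.GRUCell(in_size, hidden_size)
        self.gru_s = nn.GRUCell(in_size, hidden_size)
        self.out_c = nn.Linear(hidden_size, cont_size)  # softmax head for continue
        self.out_a = nn.Linear(hidden_size, act_size)   # softmax head for act
        self.out_s = nn.Linear(hidden_size, slot_size)  # sigmoid head for slots (multi-label)

    def forward(self, c_prev, a_prev, s_prev, k, h_prev):
        # Continue unit: conditioned on the previous tuple and the KB vector.
        x_c = torch.cat([c_prev, a_prev, s_prev, k], dim=-1)
        h_c = self.gru_c(x_c, h_prev)
        c_t = torch.softmax(self.out_c(h_c), dim=-1)
        # Act unit: conditioned on the freshly predicted continue symbol.
        x_a = torch.cat([c_t, a_prev, s_prev, k], dim=-1)
        h_a = self.gru_a(x_a, h_c)
        a_t = torch.softmax(self.out_a(h_a), dim=-1)
        # Slots unit: conditioned on the freshly predicted act.
        x_s = torch.cat([c_t, a_t, s_prev, k], dim=-1)
        h_s = self.gru_s(x_s, h_a)
        s_t = torch.sigmoid(self.out_s(h_s))
        return (c_t, a_t, s_t), h_s
```

At training time the continue and act heads would be supervised with cross-entropy and the slots head with binary cross-entropy, matching the summed loss $\mathcal {L} = \mathcal {L}^c + \mathcal {L}^a + \mathcal {L}^s$.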
Critical slots and Non-critical slots: By `non-critical', we mean slots whose values the user has already provided to the system, so it is not critical for the system to include them in its output. Table 1 shows an example, with the genre slot provided by the user and the system repeating it in its answer. Critical slots refer to slots that the system must provide, like “moviename” in the Table 1 example. Although non-critical slots do not impact task completion directly, they may influence the output quality by enriching the dialogue state and helping users understand the system's utterance correctly. Furthermore, given the same dialogue state, utterances offering non-critical slots and utterances omitting them can both be present in the dataset, as these slots are optional. This makes the prediction of those slots more challenging for the system. To provide a more detailed analysis, we report the turn-level precision, recall, and F$_1$ score for all slots, critical slots, and non-critical slots of the inform act. Experiments ::: Baseline We compare five methods on the multi-act task. Classification replicates the MSR challenge BIBREF19 policy network architecture: two fully connected layers. We change the last activation from $\text{softmax}$ to $\text{sigmoid}$ in order to predict a probability for each act-slot pair. It is equivalent to a binary classification for each act-slot pair, and the loss is the sum of the binary cross-entropy losses over all pairs. Seq2Seq BIBREF12 encodes the dialogue state as a sequence, and decodes agent acts as a sequence with attention BIBREF22. Copy Seq2Seq BIBREF23 adds a copy mechanism to Seq2Seq, which allows copying words from the encoder input. CAS adopts a single GRU BIBREF20 for decoding and uses three different fully connected layers for mapping the output of the GRU to continue, act, and slots. For each step in the sequence of CAS tuples, given the output of the GRU, continue, act, and slot predictions are obtained by separate heads, each with one fully connected layer. The hidden state of the GRU and the predictions at the previous step are passed to the cell at the next step, connecting the steps sequentially. gCAS uses our proposed recurrent cell, which contains separate continue, act, and slots units that are sequentially connected. The classification architecture has two fully connected layers of size 128, and the remaining models have a hidden size of 64 and a teacher-forcing rate of 0.5. Seq2Seq and Copy Seq2Seq use beam search with a beam size of 10 during inference. CAS and gCAS do not use beam search since they require far fewer inference steps than the Seq2Seq methods. All models use the Adam optimizer BIBREF24 with a learning rate of 0.001.
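For reference, the classification baseline described above reduces to a small multi-label network; the following sketch assumes hypothetical layer names and sizes and is not the MSR challenge code.

```python
import torch
import torch.nn as nn

class ActSlotClassifier(nn.Module):
    """Multi-label baseline: one independent binary decision per act-slot pair."""
    def __init__(self, state_size, num_pairs, hidden_size=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_pairs),  # one logit per act-slot pair
        )
        # Summed binary cross-entropy over all act-slot pairs.
        self.loss_fn = nn.BCEWithLogitsLoss(reduction="sum")

    def forward(self, state_vec, targets=None):
        logits = self.net(state_vec)
        probs = torch.sigmoid(logits)
        loss = self.loss_fn(logits, targets) if targets is not None else None
        return probs, loss
```

Because each act-slot pair is predicted independently, this baseline ignores the sequential dependency among acts that the CAS and gCAS decoders are designed to capture.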
The advantage of gCAS in the restaurant domain is much more evident: the agent's inform act usually has multiple slots (see example 2 in Table TABREF15) and this makes classification and sequence generation harder, but gCAS multi-label slots decoder handles it easily. Table TABREF14 shows the turn-level acts and frame prediction performance. CAS and gCAS outperform all other models in acts prediction in terms of F$_1$ score. The main reason is that CAS and gCAS output a tuple at each recurrent step, which makes for shorter sequences that are easier to generate compared to the long sequences of Seq2Seq (example 2 in Table TABREF15). The classification method has a good precision score, but a lower recall score, suggesting it has problems making granular decisions (example 2 in Table TABREF15). At the frame level, gCAS still outperforms all other methods. The performance difference between CAS and gCAS on frames becomes much more evident, suggesting that gCAS is more capable of predicting slots that are consistent with the act. This finding is also consistent with their Entity F$_1$ and Success F$_1$ performance. However, gCAS's act-slot pair performance is far from perfect. The most common failure case is on non-critical slots (like `genre' in the example in Table TABREF4): gCAS does not predict them, while it predicts the critical ones (like `moviename' in the example in Table TABREF4). Table TABREF15 shows predictions of all methods from two emblematic examples. Example 1 is a frequent single-act multi-slots agent act. Example 2 is a complex multi-act example. The baseline classification method can predict frequent pairs in the dataset, but cannot predict any act in the complex example. The generated sequences of Copy Seq2Seq and Seq2Seq show that both models struggle in following the syntax. CAS cannot predict slots correctly even if the act is common in the dataset. gCAS returns a correct prediction for Example 1, but for Example 2 gCAS cannot predict `starttime', which is a non-critical slot. Tables TABREF16 and TABREF17 show the results of all slots, critical slots and non-critical slots under the inform act. gCAS performs better than the other methods on all slots in the movie and restaurant domains. The reason why classification performs the best here in the taxi domain is the same as the Success F$_1$. In the taxi domain, the agent usually informs the user at the last turn. The non-critical slots are also repeated frequently in the taxi domain, which makes their prediction easier. gCAS's performance is close to other methods on critical-slots. The reason is that the inform act is mostly the first act in multi-act and critical slots are usually frequent in the data. All methods can predict them well. In the movie and restaurant domains, the inform act usually appears during the dialogue and there are many optional non-critical slots that can appear (see Table TABREF11, movie and restaurant domains have more slots and pairs than the taxi domain). gCAS can better predict the non-critical slots than other methods. However, the overall performance on non-critical slots is much worse than critical slots since their appearances are optional and inconsistent in the data. Conclusion and Future Work In this paper, we introduced a multi-act dialogue policy model motivated by the need for a richer interaction between users and conversation agents. 
We studied classification and sequence generation methods for this task, and proposed a novel recurrent cell, gated CAS, which allows the decoder to output a tuple at each step. Experimental results showed that gCAS is the best performing model for multi-act prediction. The CAS decoder and the gCAS cell can also be used in a user simulator and gCAS can be applied in the encoder. A few directions for improvement have also been identified: 1) improving the performance on non-critical slots, 2) tuning the decoder with RL, 3) text generation from gCAS. We leave them as future work. Acknowledgments We would like to express our special thanks to Alexandros Papangelis and Gokhan Tur for their support and contribution. We also would like to thank Xiujun Li for his help on dataset preparation and Jane Hung for her valuable comments. Bing Liu is partially supported by the NSF grant IIS-1910424 and a research gift from Northrop Grumman.
It has three sequentially connected units to output continue, act, and slots, generating multi-acts in a double recurrent manner.
4146e1d8f79902c0bc034695998b724515b6ac81
4146e1d8f79902c0bc034695998b724515b6ac81_0
Q: What dataset do they evaluate their model on? Text: Introduction The question of how human beings resolve pronouns has long been of interest to both linguistics and natural language processing (NLP) communities, for the reason that pronoun itself has weak semantic meaning BIBREF0 and brings challenges in natural language understanding. To explore solutions for that question, pronoun coreference resolution BIBREF1 was proposed. As an important yet vital sub-task of the general coreference resolution task, pronoun coreference resolution is to find the correct reference for a given pronominal anaphor in the context and has been shown to be crucial for a series of downstream tasks BIBREF2 , including machine translation BIBREF3 , summarization BIBREF4 , information extraction BIBREF5 , and dialog systems BIBREF6 . Conventionally, people design rules BIBREF1 , BIBREF7 , BIBREF8 or use features BIBREF9 , BIBREF10 , BIBREF11 to resolve the pronoun coreferences. These methods heavily rely on the coverage and quality of the manually defined rules and features. Until recently, end-to-end solution BIBREF12 was proposed towards solving the general coreference problem, where deep learning models were used to better capture contextual information. However, training such models on annotated corpora can be biased and normally does not consider external knowledge. Despite the great efforts made in this area in the past few decades BIBREF1 , BIBREF8 , BIBREF9 , BIBREF13 , pronoun coreference resolution remains challenging. The reason behind is that the correct resolution of pronouns can be influenced by many factors BIBREF0 ; many resolution decisions require reasoning upon different contextual and external knowledge BIBREF14 , which is also proved in other NLP tasks BIBREF15 , BIBREF16 , BIBREF17 . Figure 1 demonstrates such requirement with three examples, where Example A depends on the plurality knowledge that `them' refers to plural noun phrases; Example B illustrates the gender requirement of pronouns where `she' can only refer to a female person (girl); Example C requires a more general type of knowledge that `cats can climb trees but a dog normally does not'. All of these knowledge are difficult to be learned from training data. Considering the importance of both contextual information and external human knowledge, how to jointly leverage them becomes an important question for pronoun coreference resolution. In this paper, we propose a two-layer model to address the question while solving two challenges of incorporating external knowledge into deep models for pronoun coreference resolution, where the challenges include: first, different cases have their knowledge preference, i.e., some knowledge is exclusively important for certain cases, which requires the model to be flexible in selecting appropriate knowledge per case; second, the availability of knowledge resources is limited and such resources normally contain noise, which requires the model to be robust in learning from them. Consequently, in our model, the first layer predicts the relations between candidate noun phrases and the target pronoun based on the contextual information learned by neural networks. The second layer compares the candidates pair-wisely, in which we propose a knowledge attention module to focus on appropriate knowledge based on the given context. Moreover, a softmax pruning is placed in between the two layers to select high confident candidates. 
The architecture ensures the model being able to leverage both context and external knowledge. Especially, compared with conventional approaches that simply treat external knowledge as rules or features, our model is not only more flexible and effective but also interpretable as it reflects which knowledge source has the higher weight in order to make the decision. Experiments are conducted on a widely used evaluation dataset, where the results prove that the proposed model outperforms all baseline models by a great margin. Above all, to summarize, this paper makes the following contributions: The Task Following the conventional setting BIBREF1 , the task of pronoun coreference resolution is defined as: for a pronoun $p$ and a candidate noun phrase set ${\mathcal {N}}$ , the goal is to identify the correct non-pronominal references set ${\mathcal {C}}$ . the objective is to maximize the following objective function: $${\mathcal {J}}= \frac{\sum _{c \in {\mathcal {C}}}{e^{F(c, p)}}}{\sum _{n \in {\mathcal {N}}}e^{F(n, p)}},$$ (Eq. 8) where $c$ is the correct reference and $n$ the candidate noun phrase. $F(\cdot )$ refers to the overall coreference scoring function for each $n$ regarding $p$ . Following BIBREF8 , all non-pronominal noun phrases in the recent three sentences of the pronoun $p$ are selected to form $N$ . Particularly in our setting, we want to leverage both the local contextual information and external knowledge in this task, thus for each $n$ and $p$ , $F(.)$ is decomposed into two components: $$F(n, p) = F_c(n, p) + F_k(n, p),$$ (Eq. 10) where $F_c(n, p)$ is the scoring function that predicts the relation between $n$ and $p$ based on the contextual information; $F_k(n, p)$ is the scoring function that predicts the relation between $n$ and $p$ based on the external knowledge. There could be multiple ways to compute $F_c$ and $F_k$ , where a solution proposed in this paper is described as follows. The Model The architecture of our model is shown in Figure 2 , where we use two layers to incorporate contextual information and external knowledge. Specifically, the first layer takes the representations of different $n$ and the $p$ as input and predict the relationship between each pair of $n$ and $p$ , so as to compute $F_c$ . The second layer leverages the external knowledge to compute $F_k$ , which consists of pair-wise knowledge score $f_k$ among all candidate $n$ . To enhance the efficiency of the model, a softmax pruning module is applied to select high confident candidates into the second layer. The details of the aforementioned components are described in the following subsections. Encoding Contextual Information Before $F_c$ is computed, the contextual information is encoded through a span representation (SR) module in the first layer of the model. Following BIBREF12 , we adopt the standard bidirectional LSTM (biLSTM) BIBREF18 and the attention mechanism BIBREF19 to generate the span representation, as shown in Figure 3 . Given that the initial word representations in a span $n_i$ are ${\bf x}_1,...,{\bf x}_T$ , we denote their representations ${\bf x}^*_1,...,{\bf x}^*_T$ after encoded by the biLSTM. Then we obtain the inner-span attention by $$a_t = \frac{e^{\alpha _t}}{\sum _{k=1}^{T}e^{\alpha _k}},$$ (Eq. 14) where $\alpha _t$ is computed via a standard feed-forward neural network $\alpha _t$ = $NN_\alpha ({\bf x}^*_t)$ . Thus, we have the weighted embedding of each span $\hat{x}_i$ through $$\hat{{\bf x}}_i = \sum _{k=1}^{T}a_k \cdot {\bf x}_k.$$ (Eq. 
16) Afterwards, we concatenate the starting ( ${\bf x}^*_{start}$ ) and ending ( ${\bf x}^*_{end}$ ) embedding of each span, as well as its weighted embedding ( $\hat{{\bf x}}_i$ ) and the length feature ( $\phi (i)$ ) to form its final representation $e$ : $${\bf e}_i = [{\bf x}^*_{start},{\bf x}^*_{end},\hat{{\bf x}}_i,\phi (i)].$$ (Eq. 17) Once the span representation of $n \in {\mathcal {N}}$ and $p$ are obtained, we compute $F_c$ for each $n$ with a standard feed-forward neural network: $$F_c(n, p) = NN_c([{\bf e}_n, {\bf e}_p, {\bf e}_n \odot {\bf e}_p]),$$ (Eq. 18) where $\odot $ is the element-wise multiplication. Processing External Knowledge In the second layer of our model, external knowledge is leveraged to evaluate all candidate $n$ so as to give them reasonable $F_k$ scores. In doing so, each candidate is represented as a group of features from different knowledge sources, e.g., `the cat' can be represented as a singular noun, unknown gender creature, and a regular subject of the predicate verb `climb'. For each candidate, we conduct a series of pair-wise comparisons between it and all other ones to result in its $F_k$ score. An attention mechanism is proposed to perform the comparison and selectively use the knowledge features. Consider there exists noise in external knowledge, especially when it is automatically generated, such attention mechanism ensures that, for each candidate, reliable and useful knowledge is utilized rather than ineffective ones. The details of the knowledge attention module and the overall scoring are described as follows. Knowledge Attention Figure 4 demonstrates the structure of the knowledge attention module, where there are two components: (1) weighting: assigning weights to different knowledge features regarding their importance in the comparison; (2) scoring: valuing a candidate against another one based on their features from different knowledge sources. Assuming that there are $m$ knowledge sources input to our model, each candidate can be represented by $m$ different features, which are encoded as embeddings. Therefore, two candidates $n$ and $n^\prime $ regarding $p$ have their knowledge feature embeddings ${\bf k}_{n,p}^1, {\bf k}_{n,p}^2, ..., {\bf k}_{n,p}^m$ and ${\bf k}_{n^\prime ,p}^1,{\bf k}_{n^\prime ,p}^2,...,{\bf k}_{n^\prime ,p}^m$ , respectively. The weighting component receives all features ${\bf k}$ for $n$ and $n^\prime $ , and the span representations $m$0 and $m$1 as input, where $m$2 and $m$3 help selecting appropriate knowledge based on the context. As a result, for a candidate pair ( $m$4 , $m$5 ) and a knowledge source $m$6 , its knowledge attention score is computed via $$\beta _i(n, n^\prime , p) = NN_{ka}([{\bf o}_{n,p}^i, {\bf o}_{n^\prime ,p}^i, {\bf o}_{n,p}^i \odot {\bf o}_{n^\prime ,p}^i]),$$ (Eq. 21) where $ {\bf o}_{n,p}^i= [{\bf e}_n, {\bf k}_{n,p}^i]$ and ${\bf o}_{n^\prime ,p}^i = [{\bf e}_{n^\prime }, {\bf k}_{n^\prime ,p}^i]$ are the concatenation of span representation and external knowledge embedding for candidate $n$ and $n^\prime $ respectively. The weight for features from different knowledge sources is thus computed via $$w_i = \frac{e^{\beta _i}}{\sum _{j=1}^{m}e^{\beta _j}}.$$ (Eq. 22) Similar to the weighting component, for each feature $i$ , we compute its score $f_k^i(n, n^\prime , p)$ for $n$ against $n^\prime $ in the scoring component through $$f_k^i(n, n^\prime , p) = NN_{ks}([{\bf k}_{n,p}^i, {\bf k}_{n^\prime ,p}^i, {\bf k}_{n,p}^i \odot {\bf k}_{n^\prime ,p}^i]).$$ (Eq. 
23) where it is worth noting that we exclude ${\bf e}$ in this component for the reason that, in practice, the dimension of ${\bf e}$ is normally much higher than ${\bf k}$ . As a result, it could dominate the computation if ${\bf e}$ and ${\bf k}$ is concatenated. Once the weights and scores are obtained, we have a weighted knowledge score for $n$ against $n^\prime $ : $$f_k(n, n^\prime , p) = \sum _{i=1}^{m}w_i \cdot f_k^i(n, n^\prime , p).$$ (Eq. 25) Overall Knowledge Score After all pairs of $n$ and $n^\prime $ are processed by the attention module, the overall knowledge score for $n$ is computed through the averaged $f_k(n, n^\prime , p)$ over all $n^\prime $ : $$F_k(n, p) = \frac{\sum _{n^\prime \in {\mathcal {N}}_o} f_k(n, n^\prime , p)}{|{\mathcal {N}}_o|},$$ (Eq. 26) where ${\mathcal {N}}_o = {\mathcal {N}}- n$ for each $n$ . Softmax Pruning Normally, there could be many noun phrases that serve as the candidates for the target pronoun. One potential obstacle in the pair-wise comparison of candidate noun phrases in our model is the squared complexity $O(|{\mathcal {N}}|^2)$ with respect to the size of ${\mathcal {N}}$ . To filter out low confident candidates so as to make the model more efficient, we use a softmax-pruning module between the two layers in our model to select candidates for the next step. The module takes $F_c$ as input for each $n$ , uses a softmax computation: $$\hat{F}_c(n, p) = \frac{e^{F_c(n, p)}}{\sum _{n_i \in {\mathcal {N}}}e^{F_c(n_i, p)}}.$$ (Eq. 28) where candidates with higher $\hat{F}_c$ are kept, based on a threshold $t$ predefined as the pruning standard. Therefore, if candidates have similar $F_c$ scores, the module allow more of them to proceed to the second layer. Compared with other conventional pruning methods BIBREF12 , BIBREF20 that generally keep a fixed number of candidates, our pruning strategy is more efficient and flexible. Data The CoNLL-2012 shared task BIBREF21 corpus is used as the evaluation dataset, which is selected from the Ontonotes 5.0. Following conventional approaches BIBREF9 , BIBREF11 , for each pronoun in the document, we consider candidate $n$ from the previous two sentences and the current sentence. For pronouns, we consider two types of them following BIBREF9 , i.e., third personal pronoun (she, her, he, him, them, they, it) and possessive pronoun (his, hers, its, their, theirs). Table 1 reports the number of the two type pronouns and the overall statistics for the experimental dataset. According to our selection range of candidate $n$ , on average, each pronoun has 4.6 candidates and 1.3 correct references. Knowledge Types In this study, we use two types of knowledge in our experiments. The first type is linguistic features, i.e., plurality and animacy & gender. We employ the Stanford parser, which generates plurality, animacy, and gender markups for all the noun phrases, to annotate our data. Specifically, the plurality feature denotes each $n$ and $p$ to be singular or plural. For each candidate $n$ , if its plurality status is the same as the target pronoun, we label it 1, otherwise 0. The animacy & gender (AG) feature denotes whether a $n$ or $p$ is a living object, and being male, female, or neutral if it is alive. For each candidate $n$ , if its AG feature matches the target pronoun's, we label it 1, otherwise 0. The second type is the selectional preference (SP) knowledge. 
For this knowledge, we create a knowledge base by counting how many times a predicate-argument tuple appears in a corpus and use the resulted number to represent the preference strength. Specifically, we use the English Wikipedia as the base corpus for such counting. Then we parse the entire corpus through the Stanford parser and record all dependency edges in the format of (predicate, argument, relation, number), where predicate is the governor and argument the dependent in the original parsed dependency edge. Later for sentences in the training and test data, we firstly parse each sentence and find out the dependency edge linking $p$ and its corresponding predicate. Then for each candidate $n$ in a sentence, we check the previously created SP knowledge base and find out how many times it appears as the argument of different predicates with the same dependency relation (i.e., nsubj and dobj). The resulted frequency is grouped into the following buckets [1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+] and we use the bucket id as the final SP knowledge. Thus in the previous example: The dog is chasing the cat but it climbs the tree. Its parsing result indicates that `it' is the subject of the verb `climb'. Then for `the dog', `the cat', and `the tree', we check their associations with `climb' in the knowledge base and group them in the buckets to form the SP knowledge features. Baselines Several baselines are compared in this work. The first two are conventional unsupervised ones: [leftmargin=*] Recent Candidate, which simply selects the most recent noun phrase that appears in front of the target pronoun. Deterministic model BIBREF22 , which proposes one multi-pass seive model with human designed rules for the coreference resolution task. Besides the unsupervised models, we also compare with three representative supervised ones: [leftmargin=*] Statistical model, proposed by BIBREF23 , uses human-designed entity-level features between clusters and mentions for coreference resolution. Deep-RL model, proposed by BIBREF24 , a reinforcement learning method to directly optimize the coreference matrix instead of the traditional loss function. End2end is the current state-of-the-art coreference model BIBREF20 , which performs in an end-to-end manner and leverages both the contextual information and a pre-trained language model BIBREF25 . Note that the Deterministic, Statistical, and Deep-RL models are included in the Stanford CoreNLP toolkit, and experiments are conducted with their provided code. For End2end, we use their released code and replace its mention detection component with gold mentions for the fair comparison. To clearly show the effectiveness of the proposed model, we also present a variation of our model as an extra baseline to illustrate the effect of different knowledge incorporation manner: [leftmargin=*] Feature Concatenation, a simplified version of the complete model that removes the second knowledge processing layer, but directly treats all external knowledge embeddings as features and concatenates them to span representations. Implementation Following previous work BIBREF20 , we use the concatenation of the 300d GloVe embeddings BIBREF26 and the ELMo BIBREF25 embeddings as the initial word representations. Out-of-vocabulary words are initialized with zero vectors. Hyper-parameters are set as follows. The hidden state of the LSTM module is set to 200, and all the feed-forward networks in our model have two 150-dimension hidden layers. 
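To make the candidate filtering step concrete, the softmax pruning between the two layers (Eq. 28) can be sketched as follows; this is an illustrative snippet assuming PyTorch tensors, not the authors' code.

```python
import torch

def softmax_prune(f_c_scores, threshold=1e-7):
    """Keep candidates whose normalized contextual score exceeds the threshold.

    f_c_scores: 1-D tensor of F_c(n, p) scores, one entry per candidate noun phrase.
    Returns the indices of the candidates passed on to the knowledge layer.
    """
    probs = torch.softmax(f_c_scores, dim=0)            # Eq. (28)
    return torch.nonzero(probs > threshold).squeeze(-1)

# Example: four candidates, the last one is pruned away.
scores = torch.tensor([2.3, 1.9, 0.4, -30.0])
print(softmax_prune(scores))  # tensor([0, 1, 2])
```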
The default pruning threshold $t$ for softmax pruning is set to $10^{-7}$ . All linguistic features (plurality and AG) and external knowledge (SP) are encoded as 20-dimension embeddings. For model training, we use cross-entropy as the loss function and Adam BIBREF27 as the optimizer. All the aforementioned hyper-parameters are initialized randomly, and we apply dropout rate 0.2 to all hidden layers in the model. Our model treats a candidate as the correct reference if its predicted overall score $F(n,p)$ is larger than 0. The model training is performed with up to 100 epochs, and the best one is selected based on its performance on the development set. Experimental Results Table 2 compares the performance of our model with all baselines. Overall, our model performs the best with respect to all evaluation metrics. Several findings are also observed from the results. First, manually defined knowledge and features are not enough to cover rich contextual information. Deep learning models (e.g., End2end and our proposed models), which leverage text representations for context, outperform other approaches by a great margin, especially on the recall. Second, external knowledge is highly helpful in this task, which is supported by that our model outperforms the End2end model significantly. Moreover, the comparison between the two variants of our models is also interesting, where the final two-layer model outperforms the Feature Concatenation model. It proves that simply treating external knowledge as the feature, even though they are from the same sources, is not as effective as learning them in a joint framework. The reason behind this result is mainly from the noise in the knowledge source, e.g., parsing error, incorrectly identified relations, etc. For example, the plurality of 17% noun phrases are wrongly labeled in the test data. As a comparison, our knowledge attention might contribute to alleviate such noise when incorporating all knowledge sources. Effect of Different Knowledge To illustrate the importance of different knowledge sources and the knowledge attention mechanism, we ablate various components of our model and report the corresponding F1 scores on the test data. The results are shown in Table 3 , which clearly show the necessity of the knowledge. Interestingly, AG contributes the most among all knowledge types, which indicates that potentially more cases in the evaluation dataset demand on the AG knowledge than others. More importantly, the results also prove the effectiveness of the knowledge attention module, which contributes to the performance gap between our model and the Feature Concatenation one. Effect of Different Pruning Thresholds We try different thresholds $t$ for the softmax pruning in selecting reliable candidates. The effects of different thresholds on reducing candidates and overall performance are shown in Figure 5 and 6 respectively. Along with the increase of $t$ , both the max and the average number of pruned candidates drop quickly, so that the space complexity of the model can be reduced accordingly. Particularly, there are as much as 80% candidates can be filtered out when $t = 10^{-1}$ . Meanwhile, when referring to Figure 6 , it is observed that the model performs stable with the decreasing of candidate numbers. Not surprisingly, the precision rises when reducing candidate numbers, yet the recall drops dramatically, eventually results in the drop of F1. 
With the above observations, the reason we set $t = 10^{-7}$ as the default threshold is straightforward: on this value, one-third candidates are pruned with almost no influence on the model performance in terms of precision, recall, and the F1 score. Case Study To further demonstrate the effectiveness of incorporating knowledge into pronoun coreference resolution, two examples are provided for detailed analysis. The prediction results of the End2end model and our complete model are shown in Table 4 . There are different challenges in both examples. In Example A, `Jesus', `man', and `my son' are all similar (male) noun phrases matching the target pronoun `He'. The End2end model predicts all of them to be correct references because their context provides limited help in distinguishing them. In Example B, the distance between `an accident' and the pronoun `it' is too far. As a result, the `None' result from the End2end model indicates that the contextual information is not enough to make the decision. As a comparison, in our model, integrating external knowledge can help to solve such challenges, e.g., for Example A, SP knowledge helps when Plurality and AG cannot distinguish all candidates. To clearly illustrate how our model leverages the external knowledge, we visualize the knowledge attention of the correct reference against other candidates via heatmaps in Figure 7 . Two interesting observations are drawn from the visualization. First, given two candidates, if they are significantly different in one feature, our model tends to pay more attention to that feature. Take AG as an example, in Example A, the AG features of all candidates consistently match the pronoun `he' (all male/neutral). Thus the comparison between `my son' and all candidates pay no attention to the AG feature. While in Example B, the target pronoun `it' cannot describe human, thus 'father' and `friend' are 0 on the AG feature while `hospital' and `accident' are 1. As a result, the attention module emphasizes AG more than other knowledge types. Second, The importance of SP is clearly shown in these examples. In example A, Plurality and AG features cannot help, the attention module weights higher on SP because `son' appears 100 times as the argument of the parsed predicate `child' in the SP knowledge base, while other candidates appear much less at that position. In example B, as mentioned above, once AG helps filtering 'hospital' and 'accident', SP plays an important role in distinguishing them because `accident' appears 26 times in the SP knowledge base as the argument of the `fault' from the results of the parser, while `hospital' never appears at that position. Related Work Coreference resolution is a core task for natural language understanding, where it detects mention span and identifies coreference relations among them. As demonstrated in BIBREF12 , mention detection and coreference prediction are the two major focuses of the task. Different from the general coreference task, pronoun coreference resolution has its unique challenge since the semantics of pronouns are often not as clear as normal noun phrases, in general, how to leverage the context and external knowledge to resolve the coreference for pronouns becomes its focus BIBREF1 , BIBREF14 , BIBREF28 . 
In previous work, external knowledge including manually defined rules BIBREF1 , BIBREF9 , such as number/gender requirement of different pronouns, and world knowledge BIBREF14 , such as selectional preference BIBREF29 , BIBREF30 and eventuality knowledge BIBREF31 , have been proved to be helpful for pronoun coreference resolution. Recently, with the development of deep learning, BIBREF12 proposed an end-to-end model that learns contextual information with an LSTM module and proved that such knowledge is helpful for coreference resolution when the context is properly encoded. The aforementioned two types of knowledge have their own advantages: the contextual information covers diverse text expressions that are difficult to be predefined while the external knowledge is usually more precisely constructed and able to provide extra information beyond the training data. Different from previous work, we explore the possibility of joining the two types of knowledge for pronoun coreference resolution rather than use only one of them. To the best of our knowledge, this is the first attempt that uses deep learning model to incorporate contextual information and external knowledge for pronoun coreference resolution. Conclusion In this paper, we proposed a two-layer model for pronoun coreference resolution, where the first layer encodes contextual information and the second layer leverages external knowledge. Particularly, a knowledge attention mechanism is proposed to selectively leverage features from different knowledge sources. As an enhancement to existing methods, the proposed model combines the advantage of conventional feature-based models and deep learning models, so that context and external knowledge can be synchronously and effectively used for this task. Experimental results and case studies demonstrate the superiority of the proposed model to state-of-the-art baselines. Since the proposed model adopted an extensible structure, one possible future work is to explore the best way to enhance it with more complicated knowledge resources such as knowledge graphs. Acknowledgements This paper was partially supported by the Early Career Scheme (ECS, No.26206717) from Research Grants Council in Hong Kong. In addition, Hongming Zhang has been supported by the Hong Kong Ph.D. Fellowship and the Tencent Rhino-Bird Elite Training Program. We also thank the anonymous reviewers for their valuable comments and suggestions that help improving the quality of this paper.
CoNLL-2012 shared task BIBREF21 corpus
42394c54a950bae8cebecda9de68ee78de69dc0d
42394c54a950bae8cebecda9de68ee78de69dc0d_0
Q: What is the source of external knowledge? Text: Introduction The question of how human beings resolve pronouns has long been of interest to both linguistics and natural language processing (NLP) communities, for the reason that pronoun itself has weak semantic meaning BIBREF0 and brings challenges in natural language understanding. To explore solutions for that question, pronoun coreference resolution BIBREF1 was proposed. As an important yet vital sub-task of the general coreference resolution task, pronoun coreference resolution is to find the correct reference for a given pronominal anaphor in the context and has been shown to be crucial for a series of downstream tasks BIBREF2 , including machine translation BIBREF3 , summarization BIBREF4 , information extraction BIBREF5 , and dialog systems BIBREF6 . Conventionally, people design rules BIBREF1 , BIBREF7 , BIBREF8 or use features BIBREF9 , BIBREF10 , BIBREF11 to resolve the pronoun coreferences. These methods heavily rely on the coverage and quality of the manually defined rules and features. Until recently, end-to-end solution BIBREF12 was proposed towards solving the general coreference problem, where deep learning models were used to better capture contextual information. However, training such models on annotated corpora can be biased and normally does not consider external knowledge. Despite the great efforts made in this area in the past few decades BIBREF1 , BIBREF8 , BIBREF9 , BIBREF13 , pronoun coreference resolution remains challenging. The reason behind is that the correct resolution of pronouns can be influenced by many factors BIBREF0 ; many resolution decisions require reasoning upon different contextual and external knowledge BIBREF14 , which is also proved in other NLP tasks BIBREF15 , BIBREF16 , BIBREF17 . Figure 1 demonstrates such requirement with three examples, where Example A depends on the plurality knowledge that `them' refers to plural noun phrases; Example B illustrates the gender requirement of pronouns where `she' can only refer to a female person (girl); Example C requires a more general type of knowledge that `cats can climb trees but a dog normally does not'. All of these knowledge are difficult to be learned from training data. Considering the importance of both contextual information and external human knowledge, how to jointly leverage them becomes an important question for pronoun coreference resolution. In this paper, we propose a two-layer model to address the question while solving two challenges of incorporating external knowledge into deep models for pronoun coreference resolution, where the challenges include: first, different cases have their knowledge preference, i.e., some knowledge is exclusively important for certain cases, which requires the model to be flexible in selecting appropriate knowledge per case; second, the availability of knowledge resources is limited and such resources normally contain noise, which requires the model to be robust in learning from them. Consequently, in our model, the first layer predicts the relations between candidate noun phrases and the target pronoun based on the contextual information learned by neural networks. The second layer compares the candidates pair-wisely, in which we propose a knowledge attention module to focus on appropriate knowledge based on the given context. Moreover, a softmax pruning is placed in between the two layers to select high confident candidates. 
The architecture ensures the model being able to leverage both context and external knowledge. Especially, compared with conventional approaches that simply treat external knowledge as rules or features, our model is not only more flexible and effective but also interpretable as it reflects which knowledge source has the higher weight in order to make the decision. Experiments are conducted on a widely used evaluation dataset, where the results prove that the proposed model outperforms all baseline models by a great margin. Above all, to summarize, this paper makes the following contributions: The Task Following the conventional setting BIBREF1 , the task of pronoun coreference resolution is defined as: for a pronoun $p$ and a candidate noun phrase set ${\mathcal {N}}$ , the goal is to identify the correct non-pronominal references set ${\mathcal {C}}$ . the objective is to maximize the following objective function: $${\mathcal {J}}= \frac{\sum _{c \in {\mathcal {C}}}{e^{F(c, p)}}}{\sum _{n \in {\mathcal {N}}}e^{F(n, p)}},$$ (Eq. 8) where $c$ is the correct reference and $n$ the candidate noun phrase. $F(\cdot )$ refers to the overall coreference scoring function for each $n$ regarding $p$ . Following BIBREF8 , all non-pronominal noun phrases in the recent three sentences of the pronoun $p$ are selected to form $N$ . Particularly in our setting, we want to leverage both the local contextual information and external knowledge in this task, thus for each $n$ and $p$ , $F(.)$ is decomposed into two components: $$F(n, p) = F_c(n, p) + F_k(n, p),$$ (Eq. 10) where $F_c(n, p)$ is the scoring function that predicts the relation between $n$ and $p$ based on the contextual information; $F_k(n, p)$ is the scoring function that predicts the relation between $n$ and $p$ based on the external knowledge. There could be multiple ways to compute $F_c$ and $F_k$ , where a solution proposed in this paper is described as follows. The Model The architecture of our model is shown in Figure 2 , where we use two layers to incorporate contextual information and external knowledge. Specifically, the first layer takes the representations of different $n$ and the $p$ as input and predict the relationship between each pair of $n$ and $p$ , so as to compute $F_c$ . The second layer leverages the external knowledge to compute $F_k$ , which consists of pair-wise knowledge score $f_k$ among all candidate $n$ . To enhance the efficiency of the model, a softmax pruning module is applied to select high confident candidates into the second layer. The details of the aforementioned components are described in the following subsections. Encoding Contextual Information Before $F_c$ is computed, the contextual information is encoded through a span representation (SR) module in the first layer of the model. Following BIBREF12 , we adopt the standard bidirectional LSTM (biLSTM) BIBREF18 and the attention mechanism BIBREF19 to generate the span representation, as shown in Figure 3 . Given that the initial word representations in a span $n_i$ are ${\bf x}_1,...,{\bf x}_T$ , we denote their representations ${\bf x}^*_1,...,{\bf x}^*_T$ after encoded by the biLSTM. Then we obtain the inner-span attention by $$a_t = \frac{e^{\alpha _t}}{\sum _{k=1}^{T}e^{\alpha _k}},$$ (Eq. 14) where $\alpha _t$ is computed via a standard feed-forward neural network $\alpha _t$ = $NN_\alpha ({\bf x}^*_t)$ . Thus, we have the weighted embedding of each span $\hat{x}_i$ through $$\hat{{\bf x}}_i = \sum _{k=1}^{T}a_k \cdot {\bf x}_k.$$ (Eq. 
16) Afterwards, we concatenate the starting ( ${\bf x}^*_{start}$ ) and ending ( ${\bf x}^*_{end}$ ) embedding of each span, as well as its weighted embedding ( $\hat{{\bf x}}_i$ ) and the length feature ( $\phi (i)$ ) to form its final representation $e$ : $${\bf e}_i = [{\bf x}^*_{start},{\bf x}^*_{end},\hat{{\bf x}}_i,\phi (i)].$$ (Eq. 17) Once the span representation of $n \in {\mathcal {N}}$ and $p$ are obtained, we compute $F_c$ for each $n$ with a standard feed-forward neural network: $$F_c(n, p) = NN_c([{\bf e}_n, {\bf e}_p, {\bf e}_n \odot {\bf e}_p]),$$ (Eq. 18) where $\odot $ is the element-wise multiplication. Processing External Knowledge In the second layer of our model, external knowledge is leveraged to evaluate all candidate $n$ so as to give them reasonable $F_k$ scores. In doing so, each candidate is represented as a group of features from different knowledge sources, e.g., `the cat' can be represented as a singular noun, unknown gender creature, and a regular subject of the predicate verb `climb'. For each candidate, we conduct a series of pair-wise comparisons between it and all other ones to result in its $F_k$ score. An attention mechanism is proposed to perform the comparison and selectively use the knowledge features. Consider there exists noise in external knowledge, especially when it is automatically generated, such attention mechanism ensures that, for each candidate, reliable and useful knowledge is utilized rather than ineffective ones. The details of the knowledge attention module and the overall scoring are described as follows. Knowledge Attention Figure 4 demonstrates the structure of the knowledge attention module, where there are two components: (1) weighting: assigning weights to different knowledge features regarding their importance in the comparison; (2) scoring: valuing a candidate against another one based on their features from different knowledge sources. Assuming that there are $m$ knowledge sources input to our model, each candidate can be represented by $m$ different features, which are encoded as embeddings. Therefore, two candidates $n$ and $n^\prime $ regarding $p$ have their knowledge feature embeddings ${\bf k}_{n,p}^1, {\bf k}_{n,p}^2, ..., {\bf k}_{n,p}^m$ and ${\bf k}_{n^\prime ,p}^1,{\bf k}_{n^\prime ,p}^2,...,{\bf k}_{n^\prime ,p}^m$ , respectively. The weighting component receives all features ${\bf k}$ for $n$ and $n^\prime $ , and the span representations $m$0 and $m$1 as input, where $m$2 and $m$3 help selecting appropriate knowledge based on the context. As a result, for a candidate pair ( $m$4 , $m$5 ) and a knowledge source $m$6 , its knowledge attention score is computed via $$\beta _i(n, n^\prime , p) = NN_{ka}([{\bf o}_{n,p}^i, {\bf o}_{n^\prime ,p}^i, {\bf o}_{n,p}^i \odot {\bf o}_{n^\prime ,p}^i]),$$ (Eq. 21) where $ {\bf o}_{n,p}^i= [{\bf e}_n, {\bf k}_{n,p}^i]$ and ${\bf o}_{n^\prime ,p}^i = [{\bf e}_{n^\prime }, {\bf k}_{n^\prime ,p}^i]$ are the concatenation of span representation and external knowledge embedding for candidate $n$ and $n^\prime $ respectively. The weight for features from different knowledge sources is thus computed via $$w_i = \frac{e^{\beta _i}}{\sum _{j=1}^{m}e^{\beta _j}}.$$ (Eq. 22) Similar to the weighting component, for each feature $i$ , we compute its score $f_k^i(n, n^\prime , p)$ for $n$ against $n^\prime $ in the scoring component through $$f_k^i(n, n^\prime , p) = NN_{ks}([{\bf k}_{n,p}^i, {\bf k}_{n^\prime ,p}^i, {\bf k}_{n,p}^i \odot {\bf k}_{n^\prime ,p}^i]).$$ (Eq. 
23) where it is worth noting that we exclude ${\bf e}$ in this component for the reason that, in practice, the dimension of ${\bf e}$ is normally much higher than ${\bf k}$ . As a result, it could dominate the computation if ${\bf e}$ and ${\bf k}$ is concatenated. Once the weights and scores are obtained, we have a weighted knowledge score for $n$ against $n^\prime $ : $$f_k(n, n^\prime , p) = \sum _{i=1}^{m}w_i \cdot f_k^i(n, n^\prime , p).$$ (Eq. 25) Overall Knowledge Score After all pairs of $n$ and $n^\prime $ are processed by the attention module, the overall knowledge score for $n$ is computed through the averaged $f_k(n, n^\prime , p)$ over all $n^\prime $ : $$F_k(n, p) = \frac{\sum _{n^\prime \in {\mathcal {N}}_o} f_k(n, n^\prime , p)}{|{\mathcal {N}}_o|},$$ (Eq. 26) where ${\mathcal {N}}_o = {\mathcal {N}}- n$ for each $n$ . Softmax Pruning Normally, there could be many noun phrases that serve as the candidates for the target pronoun. One potential obstacle in the pair-wise comparison of candidate noun phrases in our model is the squared complexity $O(|{\mathcal {N}}|^2)$ with respect to the size of ${\mathcal {N}}$ . To filter out low confident candidates so as to make the model more efficient, we use a softmax-pruning module between the two layers in our model to select candidates for the next step. The module takes $F_c$ as input for each $n$ , uses a softmax computation: $$\hat{F}_c(n, p) = \frac{e^{F_c(n, p)}}{\sum _{n_i \in {\mathcal {N}}}e^{F_c(n_i, p)}}.$$ (Eq. 28) where candidates with higher $\hat{F}_c$ are kept, based on a threshold $t$ predefined as the pruning standard. Therefore, if candidates have similar $F_c$ scores, the module allow more of them to proceed to the second layer. Compared with other conventional pruning methods BIBREF12 , BIBREF20 that generally keep a fixed number of candidates, our pruning strategy is more efficient and flexible. Data The CoNLL-2012 shared task BIBREF21 corpus is used as the evaluation dataset, which is selected from the Ontonotes 5.0. Following conventional approaches BIBREF9 , BIBREF11 , for each pronoun in the document, we consider candidate $n$ from the previous two sentences and the current sentence. For pronouns, we consider two types of them following BIBREF9 , i.e., third personal pronoun (she, her, he, him, them, they, it) and possessive pronoun (his, hers, its, their, theirs). Table 1 reports the number of the two type pronouns and the overall statistics for the experimental dataset. According to our selection range of candidate $n$ , on average, each pronoun has 4.6 candidates and 1.3 correct references. Knowledge Types In this study, we use two types of knowledge in our experiments. The first type is linguistic features, i.e., plurality and animacy & gender. We employ the Stanford parser, which generates plurality, animacy, and gender markups for all the noun phrases, to annotate our data. Specifically, the plurality feature denotes each $n$ and $p$ to be singular or plural. For each candidate $n$ , if its plurality status is the same as the target pronoun, we label it 1, otherwise 0. The animacy & gender (AG) feature denotes whether a $n$ or $p$ is a living object, and being male, female, or neutral if it is alive. For each candidate $n$ , if its AG feature matches the target pronoun's, we label it 1, otherwise 0. The second type is the selectional preference (SP) knowledge. 
For this knowledge, we create a knowledge base by counting how many times a predicate-argument tuple appears in a corpus and use the resulted number to represent the preference strength. Specifically, we use the English Wikipedia as the base corpus for such counting. Then we parse the entire corpus through the Stanford parser and record all dependency edges in the format of (predicate, argument, relation, number), where predicate is the governor and argument the dependent in the original parsed dependency edge. Later for sentences in the training and test data, we firstly parse each sentence and find out the dependency edge linking $p$ and its corresponding predicate. Then for each candidate $n$ in a sentence, we check the previously created SP knowledge base and find out how many times it appears as the argument of different predicates with the same dependency relation (i.e., nsubj and dobj). The resulted frequency is grouped into the following buckets [1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+] and we use the bucket id as the final SP knowledge. Thus in the previous example: The dog is chasing the cat but it climbs the tree. Its parsing result indicates that `it' is the subject of the verb `climb'. Then for `the dog', `the cat', and `the tree', we check their associations with `climb' in the knowledge base and group them in the buckets to form the SP knowledge features. Baselines Several baselines are compared in this work. The first two are conventional unsupervised ones: [leftmargin=*] Recent Candidate, which simply selects the most recent noun phrase that appears in front of the target pronoun. Deterministic model BIBREF22 , which proposes one multi-pass seive model with human designed rules for the coreference resolution task. Besides the unsupervised models, we also compare with three representative supervised ones: [leftmargin=*] Statistical model, proposed by BIBREF23 , uses human-designed entity-level features between clusters and mentions for coreference resolution. Deep-RL model, proposed by BIBREF24 , a reinforcement learning method to directly optimize the coreference matrix instead of the traditional loss function. End2end is the current state-of-the-art coreference model BIBREF20 , which performs in an end-to-end manner and leverages both the contextual information and a pre-trained language model BIBREF25 . Note that the Deterministic, Statistical, and Deep-RL models are included in the Stanford CoreNLP toolkit, and experiments are conducted with their provided code. For End2end, we use their released code and replace its mention detection component with gold mentions for the fair comparison. To clearly show the effectiveness of the proposed model, we also present a variation of our model as an extra baseline to illustrate the effect of different knowledge incorporation manner: [leftmargin=*] Feature Concatenation, a simplified version of the complete model that removes the second knowledge processing layer, but directly treats all external knowledge embeddings as features and concatenates them to span representations. Implementation Following previous work BIBREF20 , we use the concatenation of the 300d GloVe embeddings BIBREF26 and the ELMo BIBREF25 embeddings as the initial word representations. Out-of-vocabulary words are initialized with zero vectors. Hyper-parameters are set as follows. The hidden state of the LSTM module is set to 200, and all the feed-forward networks in our model have two 150-dimension hidden layers. 
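As an illustration of the knowledge layer, the pair-wise knowledge attention described above (Eq. 21-25) can be sketched as below. For brevity the feed-forward networks $NN_{ka}$ and $NN_{ks}$ are reduced to single linear layers, and all names and shapes are assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class KnowledgeAttention(nn.Module):
    """Weight and score m knowledge sources for a candidate pair (n, n') w.r.t. pronoun p."""
    def __init__(self, span_dim, know_dim):
        super().__init__()
        self.nn_ka = nn.Linear(3 * (span_dim + know_dim), 1)  # weighting over [e, k] pairs
        self.nn_ks = nn.Linear(3 * know_dim, 1)               # scoring on k only (e excluded)

    def forward(self, e_n, e_np, k_n, k_np):
        # e_n, e_np: (span_dim,) span representations of n and n'.
        # k_n, k_np: (m, know_dim) knowledge feature embeddings of n and n'.
        m = k_n.size(0)
        o_n = torch.cat([e_n.expand(m, -1), k_n], dim=-1)
        o_np = torch.cat([e_np.expand(m, -1), k_np], dim=-1)
        beta = self.nn_ka(torch.cat([o_n, o_np, o_n * o_np], dim=-1)).squeeze(-1)  # Eq. (21)
        w = torch.softmax(beta, dim=0)                                             # Eq. (22)
        f_i = self.nn_ks(torch.cat([k_n, k_np, k_n * k_np], dim=-1)).squeeze(-1)   # Eq. (23)
        return (w * f_i).sum()                                                     # Eq. (25)
```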
The default pruning threshold $t$ for softmax pruning is set to $10^{-7}$ . All linguistic features (plurality and AG) and external knowledge (SP) are encoded as 20-dimension embeddings. For model training, we use cross-entropy as the loss function and Adam BIBREF27 as the optimizer. All the aforementioned hyper-parameters are initialized randomly, and we apply dropout rate 0.2 to all hidden layers in the model. Our model treats a candidate as the correct reference if its predicted overall score $F(n,p)$ is larger than 0. The model training is performed with up to 100 epochs, and the best one is selected based on its performance on the development set. Experimental Results Table 2 compares the performance of our model with all baselines. Overall, our model performs the best with respect to all evaluation metrics. Several findings are also observed from the results. First, manually defined knowledge and features are not enough to cover rich contextual information. Deep learning models (e.g., End2end and our proposed models), which leverage text representations for context, outperform other approaches by a great margin, especially on the recall. Second, external knowledge is highly helpful in this task, which is supported by that our model outperforms the End2end model significantly. Moreover, the comparison between the two variants of our models is also interesting, where the final two-layer model outperforms the Feature Concatenation model. It proves that simply treating external knowledge as the feature, even though they are from the same sources, is not as effective as learning them in a joint framework. The reason behind this result is mainly from the noise in the knowledge source, e.g., parsing error, incorrectly identified relations, etc. For example, the plurality of 17% noun phrases are wrongly labeled in the test data. As a comparison, our knowledge attention might contribute to alleviate such noise when incorporating all knowledge sources. Effect of Different Knowledge To illustrate the importance of different knowledge sources and the knowledge attention mechanism, we ablate various components of our model and report the corresponding F1 scores on the test data. The results are shown in Table 3 , which clearly show the necessity of the knowledge. Interestingly, AG contributes the most among all knowledge types, which indicates that potentially more cases in the evaluation dataset demand on the AG knowledge than others. More importantly, the results also prove the effectiveness of the knowledge attention module, which contributes to the performance gap between our model and the Feature Concatenation one. Effect of Different Pruning Thresholds We try different thresholds $t$ for the softmax pruning in selecting reliable candidates. The effects of different thresholds on reducing candidates and overall performance are shown in Figure 5 and 6 respectively. Along with the increase of $t$ , both the max and the average number of pruned candidates drop quickly, so that the space complexity of the model can be reduced accordingly. Particularly, there are as much as 80% candidates can be filtered out when $t = 10^{-1}$ . Meanwhile, when referring to Figure 6 , it is observed that the model performs stable with the decreasing of candidate numbers. Not surprisingly, the precision rises when reducing candidate numbers, yet the recall drops dramatically, eventually results in the drop of F1. 
With the above observations, the reason we set $t = 10^{-7}$ as the default threshold is straightforward: on this value, one-third candidates are pruned with almost no influence on the model performance in terms of precision, recall, and the F1 score. Case Study To further demonstrate the effectiveness of incorporating knowledge into pronoun coreference resolution, two examples are provided for detailed analysis. The prediction results of the End2end model and our complete model are shown in Table 4 . There are different challenges in both examples. In Example A, `Jesus', `man', and `my son' are all similar (male) noun phrases matching the target pronoun `He'. The End2end model predicts all of them to be correct references because their context provides limited help in distinguishing them. In Example B, the distance between `an accident' and the pronoun `it' is too far. As a result, the `None' result from the End2end model indicates that the contextual information is not enough to make the decision. As a comparison, in our model, integrating external knowledge can help to solve such challenges, e.g., for Example A, SP knowledge helps when Plurality and AG cannot distinguish all candidates. To clearly illustrate how our model leverages the external knowledge, we visualize the knowledge attention of the correct reference against other candidates via heatmaps in Figure 7 . Two interesting observations are drawn from the visualization. First, given two candidates, if they are significantly different in one feature, our model tends to pay more attention to that feature. Take AG as an example, in Example A, the AG features of all candidates consistently match the pronoun `he' (all male/neutral). Thus the comparison between `my son' and all candidates pay no attention to the AG feature. While in Example B, the target pronoun `it' cannot describe human, thus 'father' and `friend' are 0 on the AG feature while `hospital' and `accident' are 1. As a result, the attention module emphasizes AG more than other knowledge types. Second, The importance of SP is clearly shown in these examples. In example A, Plurality and AG features cannot help, the attention module weights higher on SP because `son' appears 100 times as the argument of the parsed predicate `child' in the SP knowledge base, while other candidates appear much less at that position. In example B, as mentioned above, once AG helps filtering 'hospital' and 'accident', SP plays an important role in distinguishing them because `accident' appears 26 times in the SP knowledge base as the argument of the `fault' from the results of the parser, while `hospital' never appears at that position. Related Work Coreference resolution is a core task for natural language understanding, where it detects mention span and identifies coreference relations among them. As demonstrated in BIBREF12 , mention detection and coreference prediction are the two major focuses of the task. Different from the general coreference task, pronoun coreference resolution has its unique challenge since the semantics of pronouns are often not as clear as normal noun phrases, in general, how to leverage the context and external knowledge to resolve the coreference for pronouns becomes its focus BIBREF1 , BIBREF14 , BIBREF28 . 
In previous work, external knowledge, including manually defined rules BIBREF1 , BIBREF9 , such as the number/gender requirements of different pronouns, and world knowledge BIBREF14 , such as selectional preference BIBREF29 , BIBREF30 and eventuality knowledge BIBREF31 , has been shown to be helpful for pronoun coreference resolution. Recently, with the development of deep learning, BIBREF12 proposed an end-to-end model that learns contextual information with an LSTM module and showed that such information is helpful for coreference resolution when the context is properly encoded. The aforementioned two types of knowledge have their own advantages: contextual information covers diverse text expressions that are difficult to predefine, while external knowledge is usually more precisely constructed and provides extra information beyond the training data. Different from previous work, we explore the possibility of joining the two types of knowledge for pronoun coreference resolution rather than using only one of them. To the best of our knowledge, this is the first attempt to use a deep learning model to incorporate contextual information and external knowledge for pronoun coreference resolution. Conclusion In this paper, we proposed a two-layer model for pronoun coreference resolution, where the first layer encodes contextual information and the second layer leverages external knowledge. In particular, a knowledge attention mechanism is proposed to selectively leverage features from different knowledge sources. As an enhancement to existing methods, the proposed model combines the advantages of conventional feature-based models and deep learning models, so that context and external knowledge can be used synchronously and effectively for this task. Experimental results and case studies demonstrate the superiority of the proposed model over state-of-the-art baselines. Since the proposed model adopts an extensible structure, one possible direction for future work is to explore the best way to enhance it with more complicated knowledge resources such as knowledge graphs. Acknowledgements This paper was partially supported by the Early Career Scheme (ECS, No.26206717) from the Research Grants Council of Hong Kong. In addition, Hongming Zhang has been supported by the Hong Kong Ph.D. Fellowship and the Tencent Rhino-Bird Elite Training Program. We also thank the anonymous reviewers for their valuable comments and suggestions that helped improve the quality of this paper.
counts of predicate-argument tuples from English Wikipedia
Q: Which of their proposed attention methods works better overall? Text: Introduction When performing network embedding, one maps network nodes into vector representations that reside in a low-dimensional latent space. Such techniques seek to encode topological information of the network into the embedding, such as affinity BIBREF0 , local interactions (e.g, local neighborhoods) BIBREF1 , and high-level properties such as community structure BIBREF2 . Relative to classical network-representation learning schemes BIBREF3 , network embeddings provide a more fine-grained representation that can be easily repurposed for other downstream applications (e.g., node classification, link prediction, content recommendation and anomaly detection). For real-world networks, one naturally may have access to rich side information about each node. Of particular interest are textual networks, where the side information comes in the form of natural language sequences BIBREF4 . For example, user profiles or their online posts on social networks (e.g., Facebook, Twitter), and documents in citation networks (e.g., Cora, arXiv). The integration of text information promises to significantly improve embeddings derived solely from the noisy, sparse edge representations BIBREF5 . Recent work has started to explore the joint embedding of network nodes and the associated text for abstracting more informative representations. BIBREF5 reformulated DeepWalk embedding as a matrix factorization problem, and fused text-embedding into the solution, while BIBREF6 augmented the network with documents as auxiliary nodes. Apart from direct embedding of the text content, one can first model the topics of the associated text BIBREF7 and then supply the predicted labels to facilitate embedding BIBREF8 . Many important downstream applications of network embeddings are context-dependent, since a static vector representation of the nodes adapts to the changing context less effectively BIBREF9 . For example, the interactions between social network users are context-dependent (e.g., family, work, interests), and contextualized user profiling can promote the specificity of recommendation systems. This motivates context-aware embedding techniques, such as CANE BIBREF9 , where the vector embedding dynamically depends on the context. For textual networks, the associated texts are natural candidates for context. CANE introduced a simple mutual attention weighting mechanism to derive context-aware dynamic embeddings for link prediction. Following the CANE setup, WANE BIBREF10 further improved the contextualized embedding, by considering fine-grained text alignment. Despite the promising results reported thus far, we identify three major limitations of existing context-aware network embedding solutions. First, mutual (or cross) attentions are computed from pairwise similarities between local text embeddings (word/phrase matching), whereas global sequence-level modeling is known to be more favorable across a wide range of NLP tasks BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Second, related to the above point, low-level affinity scores are directly used as mutual attention without considering any high-level parsing. Such an over-simplified operation denies desirable features, such as noise suppression and relational inference BIBREF15 , thereby compromising model performance. 
Third, mutual attention based on common similarity measures (e.g., cosine similarity) typically yields dense attention matrices, while psychological and computational evidence suggests a sparse attention mechanism functions more effectively BIBREF16 , BIBREF17 . Thus such naive similarity-based approaches can be suboptimal, since they are more likely to incorporate irrelevant word/phrase matching. This work represents an attempt to improve context-aware textual network embedding, by addressing the above issues. Our contributions include: ( INLINEFORM0 ) We present a principled and more-general formulation of the network embedding problem, under reproducing kernel Hilbert spaces (RKHS) learning; this formulation clarifies aspects of the existing literature and provides a flexible framework for future extensions. ( INLINEFORM1 ) A novel global sequence-level matching scheme is proposed, based on optimal transport, which matches key concepts between text sequences in a sparse attentive manner. ( INLINEFORM2 ) We develop a high-level attention-parsing mechanism that operates on top of low-level attention, which is capable of capturing long-term interactions and allows relational inference for better contextualization. We term our model Global Attention Network Embedding (GANE). To validate the effectiveness of GANE, we benchmarked our models against state-of-the-art counterparts on multiple datasets. Our models consistently outperform competing methods. Problem setup We introduce basic notation and definitions used in this work. Proposed Method Model framework overview To capture both the topological information (network structure INLINEFORM0 ) and the semantic information (text content INLINEFORM1 ) in the textual network embedding, we explicitly model two types of embeddings for each node INLINEFORM2 : ( INLINEFORM3 ) the topological embedding INLINEFORM4 , and ( INLINEFORM5 ) the semantic embedding INLINEFORM6 . The final embedding is constructed by concatenating the topological and semantic embeddings, i.e., INLINEFORM7 . We consider the topological embedding INLINEFORM8 as a static property of the node, fixed regardless of the context. On the other hand, the semantic embedding INLINEFORM9 dynamically depends on the context, which is the focus of this study. Motivated by the work of BIBREF9 , we consider the following probabilistic objective to train the network embeddings: DISPLAYFORM0 where INLINEFORM0 represents sampled edges from the network and INLINEFORM1 is the collection of model parameters. The edge loss INLINEFORM2 is given by the cross entropy DISPLAYFORM0 where INLINEFORM0 denotes the conditional likelihood of observing a (weighted) link between nodes INLINEFORM1 and INLINEFORM2 , with the latter serving as the context. More specifically, DISPLAYFORM0 where INLINEFORM0 is the normalizing constant and INLINEFORM1 is an inner product operation, to be defined momentarily. Note here we have suppressed the dependency on INLINEFORM2 to simplify notation. To capture both the topological and semantic information, along with their interactions, we propose to use the following decomposition for our inner product term: DISPLAYFORM0 Here we use INLINEFORM0 to denote the inner product evaluation between the two feature embeddings INLINEFORM1 and INLINEFORM2 , which can be defined by a semi-positive-definite kernel function INLINEFORM3 BIBREF26 , e.g., Euclidean kernel, Gaussian RBF, IMQ kernel, etc. 
Note that for INLINEFORM4 , INLINEFORM5 and INLINEFORM6 do not reside on the same feature space. As such, embeddings are first mapped to the same feature space for inner product evaluation. In this study, we use the Euclidean kernel INLINEFORM0 for inner product evaluation with INLINEFORM0 , and linear mapping INLINEFORM0 for feature space realignment with INLINEFORM0 . Here INLINEFORM1 is a trainable parameter, and throughout this paper we omit the bias terms in linear maps to avoid notational clutter. Note that our solution differs from existing network-embedding models in that: ( INLINEFORM0 ) our objective is a principled likelihood loss, while prior works heuristically combine the losses of four different models BIBREF9 , which may fail to capture the non-trivial interactions between the fixed and dynamic embeddings; and ( INLINEFORM1 ) we present a formal derivation of network embedding in a reproducing kernel Hilbert space. Direct optimization of ( EQREF9 ) requires summing over all nodes in the network, which can be computationally infeasible for large-scale networks. To alleviate this issue, we consider other more computationally efficient surrogate objectives. In particular, we adopt the negative sampling approach BIBREF27 , which replaces the bottleneck Softmax with a more tractable approximation given by DISPLAYFORM0 where INLINEFORM0 is the sigmoid function, and INLINEFORM1 is a noise distribution over the nodes. Negative sampling can be considered as a special variant of noise contrastive estimation BIBREF28 , which seeks to recover the ground-truth likelihood by contrasting data samples with noise samples, thereby bypassing the need to compute the normalizing constant. As the number of noise samples INLINEFORM2 goes to infinity, this approximation becomes exact BIBREF29 . Following the practice of BIBREF27 , we set our noise distribution to INLINEFORM3 , where INLINEFORM4 denotes the out-degree of node INLINEFORM5 . We argue that a key to the context-aware network embedding is the design of an effective attention mechanism, which cross-matches the relevant content between the node's associated text and the context. Over-simplified dot-product attention limits the potential of existing textual network embedding schemes. In the following sections, we present two novel, efficient attention designs that fulfill the desiderata listed in our Introduction. Our discussion follows the setup used in CANE BIBREF9 and WANE BIBREF10 , where the text from the interacting node is used as the context. Generalization to other forms of context is straightforward. Optimal-transport-based matching We first consider reformulating content matching as an optimal transport problem, and then re-purpose the transport plan as our attention score to aggregate context-dependent information. More specifically, we see a node's text and context as two (discrete) distributions over the content space. Related content will be matched in the sense that they yield a higher weight in the optimal transport plan INLINEFORM0 . The following two properties make the optimal transport plan more appealing for use as attention score. ( INLINEFORM1 ) Sparsity: when solved exactly, INLINEFORM2 is a sparse matrix with at most INLINEFORM3 non-zero elements, where INLINEFORM4 is the number of contents ( BIBREF30 , INLINEFORM5 ); ( INLINEFORM6 ) Self-normalized: row-sum and column-sum equal the respective marginal distributions. 
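Before the paper's own implementation is described, a rough sketch of computing such a transport plan is given below, using entropy-regularized Sinkhorn iterations with a cosine-distance cost. The choice of solver, the regularization strength, and the iteration count are assumptions made purely for illustration; note also that the exact linear-programming solution enjoys the sparsity property mentioned above, whereas the entropic approximation is only approximately sparse.

    import numpy as np

    def sinkhorn_plan(X, Y, eps=0.1, n_iters=100):
        # X: (n, d) embedded text sequence, Y: (m, d) embedded context sequence.
        # Uniform marginals: each token is a point mass of weight 1/n (resp. 1/m).
        n, m = X.shape[0], Y.shape[0]
        a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
        # Cost matrix: cosine distance between token embeddings (an assumption here).
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
        C = 1.0 - Xn @ Yn.T
        K = np.exp(-C / eps)
        u, v = np.ones(n), np.ones(m)
        for _ in range(n_iters):
            u = a / (K @ v)
            v = b / (K.T @ u)
        T = u[:, None] * K * v[None, :]  # rows sum to 1/n, columns to 1/m (approximately)
        return T

    # The aligned context can then be a weighted average of context embeddings, e.g.
    # aligned = (T / T.sum(axis=1, keepdims=True)) @ Y   (one possible normalization).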
Implementation-wise, we first feed embedded text sequence INLINEFORM0 and context sequence INLINEFORM1 into our OT solver to compute the OT plan, DISPLAYFORM0 Note that here we treat pre-embedded sequence INLINEFORM0 as INLINEFORM1 point masses in the feature space, each with weight INLINEFORM2 , and similarly for INLINEFORM3 . Next we “transport” the semantic content from context INLINEFORM4 according to the estimated OT plan with matrix multiplication DISPLAYFORM0 where we have treated INLINEFORM0 as a INLINEFORM1 matrix. Intuitively, this operation aligns the context with the target text sequence via averaging the context semantic embeddings with respect to the OT plan for each content element in INLINEFORM2 . To finalize the contextualized embedding, we aggregate the information from both INLINEFORM3 and the aligned INLINEFORM4 with an operator INLINEFORM5 , DISPLAYFORM0 In this case, we practice the following simple aggregation strategy: first concatenate INLINEFORM0 and the aligned INLINEFORM1 along the feature dimension, and then take max-pooling along the temporal dimension to reduce the feature vector into a INLINEFORM2 vector, followed by a linear mapping to project the embedding vector to the desired dimensionality. Attention parsing Direct application of attention scores based on a low-level similarity-based matching criteria (e.g., dot-product attention) can be problematic in a number of ways: ( INLINEFORM0 ) low-level attention scores can be noisy (i.e., spurious matchings), and ( INLINEFORM1 ) similarity-matching does not allow relational inference. To better understand these points, consider the following cases. For ( INLINEFORM2 ), if the sequence embeddings used do not explicitly address the syntactic structure of the text, a relatively dense attention score matrix can be expected. For ( INLINEFORM3 ), consider the case when the context is a query, and the matching appears as a cue in the node's text data; then the information needed is actually in the vicinity rather than the exact matching location (e.g., shifted a few steps ahead). Inspired by the work of BIBREF31 , we propose a new mechanism called attention parsing to address the aforementioned issues. As the name suggests, attention parsing re-calibrates the raw low-level attention scores to better integrate the information. To this end, we conceptually treat the raw attention matrix INLINEFORM0 as a two-dimensional image and apply convolutional filters to it: DISPLAYFORM0 where INLINEFORM0 denotes the filter banks with INLINEFORM1 and INLINEFORM2 respectively as window sizes and channel number. We can stack more convolutional layers, break sequence embedding dimensions to allow multi-group (channel) low-level attention as input, or introduce more-sophisticated model architectures (e.g., ResNet BIBREF32 , Transformer BIBREF18 , etc.) to enhance our model. For now, we focus on the simplest model described above, for the sake of demonstration. With INLINEFORM0 as the high-level representation of attention, our next step is to reduce it to a weight vector to align information from the context INLINEFORM1 . We apply a max-pooling operation with respect to the context dimension, followed by a linear map to get the logits INLINEFORM2 of the weights DISPLAYFORM0 where INLINEFORM0 is the projection matrix. 
Then the parsed attention weight INLINEFORM1 is obtained by DISPLAYFORM0 which is used to compute the aligned context embedding DISPLAYFORM0 Note that here we compute a globally aligned context embedding vector INLINEFORM0 , rather than one for each location in INLINEFORM1 as described in the last section ( INLINEFORM2 ). In the subsequent aggregation operation, INLINEFORM3 is broadcasted to all the locations in INLINEFORM4 . We call this global alignment, to distinguish it from the local alignment strategy described in the last section. Both alignment strategies have their respective merits, and in practice they can be directly combined to produce the final context-aware embedding. Related Work Experiments Experimental setup We consider three benchmark datasets: ( INLINEFORM0 ) Cora, a paper citation network with text information, built by BIBREF44 . We prune the dataset so that it only has papers on the topic of machine learning. ( INLINEFORM1 ) Hepth, a paper citation network from Arxiv on high energy physics theory, with paper abstracts as text information. ( INLINEFORM2 ) Zhihu, a Q&A network dataset constructed by BIBREF9 , which has 10,000 active users with text descriptions and their collaboration links. Summary statistics of these three datasets are summarized in Table . Pre-processing protocols from prior studies are used for data preparation BIBREF10 , BIBREF34 , BIBREF9 . For quantitative evaluation, we tested our model on the following tasks: ( INLINEFORM0 ) Link prediction, where we deliberately mask out a portion of the edges to see if the embedding learned from the remaining edges can be used to accurately predict the missing edges. ( INLINEFORM1 ) Multi-label node classification, where we use the learned embedding to predict the labels associated with each node. Note that the label information is not used in our embedding. We also carried out ablation study to identify the gains. In addition to the quantitative results, we also visualized the embedding and the attention matrices to qualitatively verify our hypotheses. For the link prediction task, we adopt the area under the curve (AUC) score to evaluate the performance, AUC is employed to measure the probability that vertices in existing edges are more similar than those in the nonexistent edge. For each training ratio, the experiment is executed 10 times and the mean AUC scores are reported, where higher AUC indicates better performance. For multi-label classification, we evaluate the performance with Macro-F1 scores. The experiment for each training ratio is also executed 10 times and the average Macro-F1 scores are reported, where a higher value indicates better performance. To demonstrate the effectiveness of the proposed solutions, we evaluated our model along with the following strong baselines. ( INLINEFORM0 ) Topology only embeddings: MMB BIBREF45 , DeepWalk BIBREF1 , LINE BIBREF33 , Node2vec BIBREF46 . ( INLINEFORM1 ) Joint embedding of topology & text: Naive combination, TADW BIBREF5 , CENE BIBREF6 , CANE BIBREF9 , WANE BIBREF10 , DMTE BIBREF34 . A brief summary of these competing models is provided in the Supplementary Material (SM). Results We consider two variants of our model, denoted as GANE-OT and GANE-AP. GANE-OT employs the most basic OT-based attention model, specifically, global word-by-word alignment model; while GANE-AP additionally uses a one-layer convolutional neural network for the attention parsing. Detailed experimental setups are described in the SM. 
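As a rough rendering of the one-layer convolutional attention parsing used by GANE-AP, the sketch below treats the raw attention matrix as an image, convolves it, pools one axis away, and softmax-normalizes the remaining logits into weights over the context. The single filter, the choice of pooled axis, and the omission of the extra linear projection are simplifying assumptions, and the shapes in the example are placeholders rather than the experimental configuration.

    import numpy as np
    from scipy.signal import convolve2d

    def attention_parsing(A, Y, kernel):
        # A: (n, m) raw low-level attention between a text of n tokens and a context of m tokens.
        # Y: (m, d) context token embeddings. kernel: (h, w) single convolution filter.
        H = convolve2d(A, kernel, mode='same')   # high-level attention "image"
        logits = H.max(axis=0)                   # pool the text axis away -> one logit per context token
        w = np.exp(logits - logits.max())
        w /= w.sum()                             # parsed attention weights (softmax)
        return w @ Y                             # globally aligned context embedding, shape (d,)

    # Toy shapes:
    A = np.random.rand(12, 9)                    # e.g., an OT plan for a 12-token text and 9-token context
    Y = np.random.randn(9, 100)
    kernel = np.full((3, 3), 1.0 / 9)
    aligned = attention_parsing(A, Y, kernel)    # -> (100,)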
Tables and summarize the results from the link-prediction experiments on all three datasets, where a different ratio of edges are used for training. Results from models other than GANE are collected from BIBREF9 , BIBREF10 and BIBREF34 . We have also repeated these experiments on our own, and the results are consistent with the ones reported. Note that BIBREF34 did not report results on DMTE. Both GANE variants consistently outperform competing solutions. In the low-training-sample regime our solutions lead by a large margin, and the performance gap closes as the number of training samples increases. This indicates that our OT-based mutual attention framework can yield more informative textual representations than other methods. Note that GANE-AP delivers better results compared with GANE-OT, suggesting the attention parsing mechanism can further improve the low-level mutual attention matrix. More results on Cora and Hepth are provided in the SM. To further evaluate the effectiveness of our model, we consider multi-label vertex classification. Following the setup described in BIBREF9 , we first computed all context-aware embeddings. Then we averaged over each node's context-aware embeddings with all other connected nodes, to obtain a global embedding for each node, i.e., INLINEFORM0 , where INLINEFORM1 denotes the degree of node INLINEFORM2 . A linear SVM is employed, instead of a sophisticated deep classifier, to predict the label attribute of a node. We randomly sample a portion of labeled vertices with embeddings ( INLINEFORM3 ) to train the classifier, using the rest of the nodes to evaluate prediction accuracy. We compare our results with those from other state-of-the-art models in Table . The GANE models delivered better results compared with their counterparts, lending strong evidence that the OT attention and attention parsing mechanism promise to capture more meaningful representations. We further explore the effect of INLINEFORM0 -gram length in our model (i.e., the filter size for the covolutional layers used by the attention parsing module). In Figure FIGREF39 we plot the AUC scores for link prediction on the Cora dataset against varying INLINEFORM1 -gram length. The performance peaked around length 20, then starts to drop, indicating a moderate attention span is more preferable. Similar results are observed on other datasets (results not shown). Experimental details on the ablation study can be found in the SM. Qualitative Analysis We employed t-SNE BIBREF47 to project the network embeddings for the Cora dataset in a two-dimensional space using GANE-OT, with each node color coded according to its label. As shown in Figure FIGREF40 , papers clustered together belong to the same category, with the clusters well-separated from each other in the network embedding space. Note that our network embeddings are trained without any label information. Together with the label classification results, this implies our model is capable of extracting meaningful information from both context and network topological. To verify that our OT-based attention mechanism indeed produces sparse attention scores, we visualized the OT attention matrices and compared them with those simarlity-based attention matrices (e.g., WANE). Figure FIGREF44 plots one typical example. Our OT solver returns a sparse attention matrix, while dot-product-based WANE attention is effectively dense. This underscores the effectiveness of OT-based attention in terms of noise suppression. 
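Returning to the vertex-classification protocol described above, a small sketch of averaging each node's context-aware embeddings over its neighbours and handing the result to a linear SVM follows; the data structures and the use of scikit-learn's LinearSVC are illustrative assumptions (a genuinely multi-label setting would additionally need a one-vs-rest wrapper).

    import numpy as np
    from sklearn.svm import LinearSVC

    def global_node_embeddings(context_emb, neighbors, num_nodes):
        # context_emb: dict mapping (u, v) -> context-aware embedding of node u w.r.t. neighbour v.
        # neighbors: dict mapping u -> list of neighbouring nodes.
        dim = len(next(iter(context_emb.values())))
        E = np.zeros((num_nodes, dim))
        for u, nbrs in neighbors.items():
            if not nbrs:
                continue
            E[u] = np.mean([context_emb[(u, v)] for v in nbrs], axis=0)
        return E

    # E = global_node_embeddings(context_emb, neighbors, num_nodes)
    # clf = LinearSVC().fit(E[train_idx], labels[train_idx])
    # accuracy = clf.score(E[test_idx], labels[test_idx])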
Conclusion We have proposed a novel and principled mutual-attention framework based on optimal transport (OT). Compared with existing solutions, the attention mechanisms employed by our GANE model enjoys the following benefits: (i) it is naturally sparse and self-normalized, (ii) it is a global sequence matching scheme, and (iii) it can capture long-term interactions between two sentences. These claims are supported by experimental evidence from link prediction and multi-label vertex classification. Looking forward, our attention mechanism can also be applied to tasks such as relational networks BIBREF15 , natural language inference BIBREF11 , and QA systems BIBREF48 . Acknowledgments This research was supported in part by DARPA, DOE, NIH, ONR and NSF. Appendix Competing models Topology only embeddings Mixed Membership Stochastic Blockmodel (MMB) BIBREF45 : a graphical model for relational data, each node randomly select a different "topic" when forming an edge. DeepWalk BIBREF1 : executes truncated random walks on the graph, and by treating nodes as tokens and random walks as natural language sequences, the node embedding are obtained using the SkipGram model BIBREF27 . Node2vec BIBREF46 : a variant of DeepWalk by executing biased random walks to explore the neighborhood (e.g., Breadth-first or Depth-first sampling). Large-scale Information Network Embedding (LINE) BIBREF33 : scalable network embedding scheme via maximizing the joint and conditional likelihoods. Joint embedding of topology & text Naive combination BIBREF9 : direct combination of the structure embedding and text embedding that best predicts edges. Text-Associated DeepWalk (TADW) BIBREF5 : reformulating embedding as a matrix factorization problem, and fused text-embedding into the solution. Content-Enhanced Network Embedding (CENE) BIBREF6 : treats texts as a special kind of nodes. Context-Aware Network Embedding (CANE) BIBREF9 : decompose the embedding into context-free and context-dependent part, use mutual attention to address the context-dependent embedding. Word-Alignment-based Network Embedding (WANE) BIBREF10 : Using fine-grained alignment to improve context-aware embedding. Diffusion Maps for Textual network Embedding (DMTE) BIBREF34 : using truncated diffusion maps to improve the context-free part embedding in CANE. Complete Link prediction results on Cora and Hepth The complete results for Cora and Hepth are listed in Tables and . Results from models other than GANE are collected from BIBREF9 , BIBREF10 , BIBREF34 . We have also repeated these experiments on our own, the results are consistent with the ones reported. Note that DMTE did not report results on Hepth BIBREF34 . Negative sampling approximation In this section we provide a quick justification for the negative sampling approximation. To this end, we first briefly review noise contrastive estimation (NCE) and how it connects to maximal likelihood estimation, then we establish the link to negative sampling. Interested readers are referred to BIBREF50 for a more thorough discussion on this topic. Noise contrastive estimation. NCE seeks to learn the parameters of a likelihood model INLINEFORM0 by optimizing the following discriminative objective: J() = uipd[p(y=1|ui,v) - K Eu'pn [p(y=0|u,v)]], where INLINEFORM1 is the label of whether INLINEFORM2 comes from the data distribution INLINEFORM3 or the tractable noise distribution INLINEFORM4 , and INLINEFORM5 is the context. 
Using the Monte Carlo estimator for the second term gives us J() = uipd[p(y=1|ui,v) - k=1K [p(y=0|uk,v)]], uk iid pn. Since the goal of INLINEFORM6 is to predict the label of a sample from a mixture distribution with INLINEFORM7 from INLINEFORM8 and INLINEFORM9 from INLINEFORM10 , plugging the model likelihood and noise likelihood into the label likelihood gives us p(y=1;u,v) = p(u|v)p(u|v) + K pn(u|v), p(y=0;u,v) = K pn(u|v)p(u|v) + K pn(u|v). Recall INLINEFORM11 takes the following softmax form DISPLAYFORM0 NCE treats INLINEFORM12 as an learnable parameter and optimized along with INLINEFORM13 . One key observation is that, in practice, one can safely clamp INLINEFORM14 to 1, and the NCE learned model ( INLINEFORM15 ) will self-normalize in the sense that INLINEFORM16 . As such, one can simply plug INLINEFORM17 into the above objective. Another key result is that, as INLINEFORM18 , the gradient of NCE objective recovers the gradient of softmax objective INLINEFORM19 BIBREF49 . Negative sampling as NCE. If we set INLINEFORM20 and let INLINEFORM21 be the uniform distribution on INLINEFORM22 , we have DISPLAYFORM1 where INLINEFORM23 is the sigmoid function. Plugging this back to the INLINEFORM24 covers the negative sampling objective Eqn (6) used in the paper. Combined with the discussion above, we know Eqn (6) provides a valid approximation to the INLINEFORM25 -likelihood in terms of the gradient directions, when INLINEFORM26 is sufficiently large. In this study, we use INLINEFORM27 negative sample for computational efficiency. Using more samples did not significantly improve our results (data not shown). Experiment Setup We use the same codebase from CANE BIBREF9 . The implementation is based on TensorFlow, all experiments are exectuted on a single NVIDIA TITAN X GPU. We set embedding dimension to INLINEFORM28 for all our experiments. To conduct a fair comparison with the baseline models, we follow the experiment setup from BIBREF10 . For all experiments, we set word embedding dimension as 100 trained from scratch. We train the model with Adam optimizer and set learning rate INLINEFORM29 . For GANE-AP model, we use best filte size INLINEFORM30 for convolution from our abalation study. Ablation study setup To test how the INLINEFORM31 -gram length affect our GANE-AP model performance, we re-run our model with different choices of INLINEFORM32 -gram length, namely, the window size in convolutional layer. Each experiment is repeated for 10 times and we report the averaged results to eliminate statistical fluctuations.
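To make the negative-sampling objective discussed in this appendix concrete, here is a compact sketch with K noise nodes drawn from an out-degree-based distribution. The 3/4-power form shown is the common word2vec convention and stands in for the formula placeholder in the main text, and all function and variable names are illustrative.

    import numpy as np

    def noise_distribution(out_degrees, power=0.75):
        # Out-degree-based noise distribution; the 3/4 exponent is the word2vec convention,
        # used here as a stand-in for the exact form in the paper.
        p = np.asarray(out_degrees, dtype=float) ** power
        return p / p.sum()

    def neg_sampling_loss(score_fn, u, v, noise_probs, K=1, rng=np.random):
        # score_fn(a, b): inner-product score s(a, b) between the embeddings of node ids a and b.
        # (u, v): an observed edge; noise_probs: 1-D array over nodes summing to one.
        sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
        loss = -np.log(sigmoid(score_fn(u, v)))
        negs = rng.choice(len(noise_probs), size=K, p=noise_probs)
        for k in negs:
            loss -= np.log(sigmoid(-score_fn(k, v)))
        return loss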
attention parsing
Q: Which dataset of texts do they use?
Cora, Hepth, Zhihu
d151327c93b67928313f8fad8079a4ff9ef89314
d151327c93b67928313f8fad8079a4ff9ef89314_0
Q: Do they measure how well they perform on longer sequences specifically? Text: Introduction When performing network embedding, one maps network nodes into vector representations that reside in a low-dimensional latent space. Such techniques seek to encode topological information of the network into the embedding, such as affinity BIBREF0 , local interactions (e.g, local neighborhoods) BIBREF1 , and high-level properties such as community structure BIBREF2 . Relative to classical network-representation learning schemes BIBREF3 , network embeddings provide a more fine-grained representation that can be easily repurposed for other downstream applications (e.g., node classification, link prediction, content recommendation and anomaly detection). For real-world networks, one naturally may have access to rich side information about each node. Of particular interest are textual networks, where the side information comes in the form of natural language sequences BIBREF4 . For example, user profiles or their online posts on social networks (e.g., Facebook, Twitter), and documents in citation networks (e.g., Cora, arXiv). The integration of text information promises to significantly improve embeddings derived solely from the noisy, sparse edge representations BIBREF5 . Recent work has started to explore the joint embedding of network nodes and the associated text for abstracting more informative representations. BIBREF5 reformulated DeepWalk embedding as a matrix factorization problem, and fused text-embedding into the solution, while BIBREF6 augmented the network with documents as auxiliary nodes. Apart from direct embedding of the text content, one can first model the topics of the associated text BIBREF7 and then supply the predicted labels to facilitate embedding BIBREF8 . Many important downstream applications of network embeddings are context-dependent, since a static vector representation of the nodes adapts to the changing context less effectively BIBREF9 . For example, the interactions between social network users are context-dependent (e.g., family, work, interests), and contextualized user profiling can promote the specificity of recommendation systems. This motivates context-aware embedding techniques, such as CANE BIBREF9 , where the vector embedding dynamically depends on the context. For textual networks, the associated texts are natural candidates for context. CANE introduced a simple mutual attention weighting mechanism to derive context-aware dynamic embeddings for link prediction. Following the CANE setup, WANE BIBREF10 further improved the contextualized embedding, by considering fine-grained text alignment. Despite the promising results reported thus far, we identify three major limitations of existing context-aware network embedding solutions. First, mutual (or cross) attentions are computed from pairwise similarities between local text embeddings (word/phrase matching), whereas global sequence-level modeling is known to be more favorable across a wide range of NLP tasks BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Second, related to the above point, low-level affinity scores are directly used as mutual attention without considering any high-level parsing. Such an over-simplified operation denies desirable features, such as noise suppression and relational inference BIBREF15 , thereby compromising model performance. 
Third, mutual attention based on common similarity measures (e.g., cosine similarity) typically yields dense attention matrices, while psychological and computational evidence suggests a sparse attention mechanism functions more effectively BIBREF16 , BIBREF17 . Thus such naive similarity-based approaches can be suboptimal, since they are more likely to incorporate irrelevant word/phrase matching. This work represents an attempt to improve context-aware textual network embedding, by addressing the above issues. Our contributions include: ( INLINEFORM0 ) We present a principled and more-general formulation of the network embedding problem, under reproducing kernel Hilbert spaces (RKHS) learning; this formulation clarifies aspects of the existing literature and provides a flexible framework for future extensions. ( INLINEFORM1 ) A novel global sequence-level matching scheme is proposed, based on optimal transport, which matches key concepts between text sequences in a sparse attentive manner. ( INLINEFORM2 ) We develop a high-level attention-parsing mechanism that operates on top of low-level attention, which is capable of capturing long-term interactions and allows relational inference for better contextualization. We term our model Global Attention Network Embedding (GANE). To validate the effectiveness of GANE, we benchmarked our models against state-of-the-art counterparts on multiple datasets. Our models consistently outperform competing methods. Problem setup We introduce basic notation and definitions used in this work. Proposed Method Model framework overview To capture both the topological information (network structure INLINEFORM0 ) and the semantic information (text content INLINEFORM1 ) in the textual network embedding, we explicitly model two types of embeddings for each node INLINEFORM2 : ( INLINEFORM3 ) the topological embedding INLINEFORM4 , and ( INLINEFORM5 ) the semantic embedding INLINEFORM6 . The final embedding is constructed by concatenating the topological and semantic embeddings, i.e., INLINEFORM7 . We consider the topological embedding INLINEFORM8 as a static property of the node, fixed regardless of the context. On the other hand, the semantic embedding INLINEFORM9 dynamically depends on the context, which is the focus of this study. Motivated by the work of BIBREF9 , we consider the following probabilistic objective to train the network embeddings: DISPLAYFORM0 where INLINEFORM0 represents sampled edges from the network and INLINEFORM1 is the collection of model parameters. The edge loss INLINEFORM2 is given by the cross entropy DISPLAYFORM0 where INLINEFORM0 denotes the conditional likelihood of observing a (weighted) link between nodes INLINEFORM1 and INLINEFORM2 , with the latter serving as the context. More specifically, DISPLAYFORM0 where INLINEFORM0 is the normalizing constant and INLINEFORM1 is an inner product operation, to be defined momentarily. Note here we have suppressed the dependency on INLINEFORM2 to simplify notation. To capture both the topological and semantic information, along with their interactions, we propose to use the following decomposition for our inner product term: DISPLAYFORM0 Here we use INLINEFORM0 to denote the inner product evaluation between the two feature embeddings INLINEFORM1 and INLINEFORM2 , which can be defined by a semi-positive-definite kernel function INLINEFORM3 BIBREF26 , e.g., Euclidean kernel, Gaussian RBF, IMQ kernel, etc. 
Note that for INLINEFORM4 , INLINEFORM5 and INLINEFORM6 do not reside on the same feature space. As such, embeddings are first mapped to the same feature space for inner product evaluation. In this study, we use the Euclidean kernel INLINEFORM0 for inner product evaluation with INLINEFORM0 , and linear mapping INLINEFORM0 for feature space realignment with INLINEFORM0 . Here INLINEFORM1 is a trainable parameter, and throughout this paper we omit the bias terms in linear maps to avoid notational clutter. Note that our solution differs from existing network-embedding models in that: ( INLINEFORM0 ) our objective is a principled likelihood loss, while prior works heuristically combine the losses of four different models BIBREF9 , which may fail to capture the non-trivial interactions between the fixed and dynamic embeddings; and ( INLINEFORM1 ) we present a formal derivation of network embedding in a reproducing kernel Hilbert space. Direct optimization of ( EQREF9 ) requires summing over all nodes in the network, which can be computationally infeasible for large-scale networks. To alleviate this issue, we consider other more computationally efficient surrogate objectives. In particular, we adopt the negative sampling approach BIBREF27 , which replaces the bottleneck Softmax with a more tractable approximation given by DISPLAYFORM0 where INLINEFORM0 is the sigmoid function, and INLINEFORM1 is a noise distribution over the nodes. Negative sampling can be considered as a special variant of noise contrastive estimation BIBREF28 , which seeks to recover the ground-truth likelihood by contrasting data samples with noise samples, thereby bypassing the need to compute the normalizing constant. As the number of noise samples INLINEFORM2 goes to infinity, this approximation becomes exact BIBREF29 . Following the practice of BIBREF27 , we set our noise distribution to INLINEFORM3 , where INLINEFORM4 denotes the out-degree of node INLINEFORM5 . We argue that a key to the context-aware network embedding is the design of an effective attention mechanism, which cross-matches the relevant content between the node's associated text and the context. Over-simplified dot-product attention limits the potential of existing textual network embedding schemes. In the following sections, we present two novel, efficient attention designs that fulfill the desiderata listed in our Introduction. Our discussion follows the setup used in CANE BIBREF9 and WANE BIBREF10 , where the text from the interacting node is used as the context. Generalization to other forms of context is straightforward. Optimal-transport-based matching We first consider reformulating content matching as an optimal transport problem, and then re-purpose the transport plan as our attention score to aggregate context-dependent information. More specifically, we see a node's text and context as two (discrete) distributions over the content space. Related content will be matched in the sense that they yield a higher weight in the optimal transport plan INLINEFORM0 . The following two properties make the optimal transport plan more appealing for use as attention score. ( INLINEFORM1 ) Sparsity: when solved exactly, INLINEFORM2 is a sparse matrix with at most INLINEFORM3 non-zero elements, where INLINEFORM4 is the number of contents ( BIBREF30 , INLINEFORM5 ); ( INLINEFORM6 ) Self-normalized: row-sum and column-sum equal the respective marginal distributions. 
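Before the implementation details that follow, a minimal NumPy sketch of this transport-plan-as-attention idea is given below. The cost function (cosine distance), the entropy-regularized Sinkhorn solver, and the aggregation shapes are assumptions made to keep the sketch self-contained; an exact OT solver would produce the sparse plan discussed above, and a learned linear projection (omitted here) would map the pooled vector to the desired dimensionality.

```python
import numpy as np

def cosine_cost(X, C):
    """Pairwise cosine-distance cost between text rows X (n x d) and context rows C (m x d)."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    Cn = C / (np.linalg.norm(C, axis=1, keepdims=True) + 1e-8)
    return 1.0 - Xn @ Cn.T

def sinkhorn_plan(M, reg=0.05, n_iter=200):
    """Entropy-regularized OT plan between two uniform discrete distributions.

    An exact solver would return a sparser plan; Sinkhorn is used here only to
    keep the sketch short and dependency-free.
    """
    n, m = M.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-M / reg)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]        # rows sum to a, columns sum to b

def align_and_aggregate(X, C):
    """'Transport' context semantics onto each text position, then aggregate."""
    T = sinkhorn_plan(cosine_cost(X, C))               # n x m attention plan
    aligned = (T / T.sum(axis=1, keepdims=True)) @ C   # average context rows per text position
    fused = np.concatenate([X, aligned], axis=1)       # concatenate along the feature dimension
    return fused.max(axis=0)                           # max-pool along the temporal dimension
```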
Implementation-wise, we first feed embedded text sequence INLINEFORM0 and context sequence INLINEFORM1 into our OT solver to compute the OT plan, DISPLAYFORM0 Note that here we treat pre-embedded sequence INLINEFORM0 as INLINEFORM1 point masses in the feature space, each with weight INLINEFORM2 , and similarly for INLINEFORM3 . Next we “transport” the semantic content from context INLINEFORM4 according to the estimated OT plan with matrix multiplication DISPLAYFORM0 where we have treated INLINEFORM0 as a INLINEFORM1 matrix. Intuitively, this operation aligns the context with the target text sequence via averaging the context semantic embeddings with respect to the OT plan for each content element in INLINEFORM2 . To finalize the contextualized embedding, we aggregate the information from both INLINEFORM3 and the aligned INLINEFORM4 with an operator INLINEFORM5 , DISPLAYFORM0 In this case, we practice the following simple aggregation strategy: first concatenate INLINEFORM0 and the aligned INLINEFORM1 along the feature dimension, and then take max-pooling along the temporal dimension to reduce the feature vector into a INLINEFORM2 vector, followed by a linear mapping to project the embedding vector to the desired dimensionality. Attention parsing Direct application of attention scores based on a low-level similarity-based matching criteria (e.g., dot-product attention) can be problematic in a number of ways: ( INLINEFORM0 ) low-level attention scores can be noisy (i.e., spurious matchings), and ( INLINEFORM1 ) similarity-matching does not allow relational inference. To better understand these points, consider the following cases. For ( INLINEFORM2 ), if the sequence embeddings used do not explicitly address the syntactic structure of the text, a relatively dense attention score matrix can be expected. For ( INLINEFORM3 ), consider the case when the context is a query, and the matching appears as a cue in the node's text data; then the information needed is actually in the vicinity rather than the exact matching location (e.g., shifted a few steps ahead). Inspired by the work of BIBREF31 , we propose a new mechanism called attention parsing to address the aforementioned issues. As the name suggests, attention parsing re-calibrates the raw low-level attention scores to better integrate the information. To this end, we conceptually treat the raw attention matrix INLINEFORM0 as a two-dimensional image and apply convolutional filters to it: DISPLAYFORM0 where INLINEFORM0 denotes the filter banks with INLINEFORM1 and INLINEFORM2 respectively as window sizes and channel number. We can stack more convolutional layers, break sequence embedding dimensions to allow multi-group (channel) low-level attention as input, or introduce more-sophisticated model architectures (e.g., ResNet BIBREF32 , Transformer BIBREF18 , etc.) to enhance our model. For now, we focus on the simplest model described above, for the sake of demonstration. With INLINEFORM0 as the high-level representation of attention, our next step is to reduce it to a weight vector to align information from the context INLINEFORM1 . We apply a max-pooling operation with respect to the context dimension, followed by a linear map to get the logits INLINEFORM2 of the weights DISPLAYFORM0 where INLINEFORM0 is the projection matrix. 
Then the parsed attention weight INLINEFORM1 is obtained by DISPLAYFORM0 which is used to compute the aligned context embedding DISPLAYFORM0 Note that here we compute a globally aligned context embedding vector INLINEFORM0 , rather than one for each location in INLINEFORM1 as described in the last section ( INLINEFORM2 ). In the subsequent aggregation operation, INLINEFORM3 is broadcasted to all the locations in INLINEFORM4 . We call this global alignment, to distinguish it from the local alignment strategy described in the last section. Both alignment strategies have their respective merits, and in practice they can be directly combined to produce the final context-aware embedding. Related Work Experiments Experimental setup We consider three benchmark datasets: ( INLINEFORM0 ) Cora, a paper citation network with text information, built by BIBREF44 . We prune the dataset so that it only has papers on the topic of machine learning. ( INLINEFORM1 ) Hepth, a paper citation network from Arxiv on high energy physics theory, with paper abstracts as text information. ( INLINEFORM2 ) Zhihu, a Q&A network dataset constructed by BIBREF9 , which has 10,000 active users with text descriptions and their collaboration links. Summary statistics of these three datasets are summarized in Table . Pre-processing protocols from prior studies are used for data preparation BIBREF10 , BIBREF34 , BIBREF9 . For quantitative evaluation, we tested our model on the following tasks: ( INLINEFORM0 ) Link prediction, where we deliberately mask out a portion of the edges to see if the embedding learned from the remaining edges can be used to accurately predict the missing edges. ( INLINEFORM1 ) Multi-label node classification, where we use the learned embedding to predict the labels associated with each node. Note that the label information is not used in our embedding. We also carried out ablation study to identify the gains. In addition to the quantitative results, we also visualized the embedding and the attention matrices to qualitatively verify our hypotheses. For the link prediction task, we adopt the area under the curve (AUC) score to evaluate the performance, AUC is employed to measure the probability that vertices in existing edges are more similar than those in the nonexistent edge. For each training ratio, the experiment is executed 10 times and the mean AUC scores are reported, where higher AUC indicates better performance. For multi-label classification, we evaluate the performance with Macro-F1 scores. The experiment for each training ratio is also executed 10 times and the average Macro-F1 scores are reported, where a higher value indicates better performance. To demonstrate the effectiveness of the proposed solutions, we evaluated our model along with the following strong baselines. ( INLINEFORM0 ) Topology only embeddings: MMB BIBREF45 , DeepWalk BIBREF1 , LINE BIBREF33 , Node2vec BIBREF46 . ( INLINEFORM1 ) Joint embedding of topology & text: Naive combination, TADW BIBREF5 , CENE BIBREF6 , CANE BIBREF9 , WANE BIBREF10 , DMTE BIBREF34 . A brief summary of these competing models is provided in the Supplementary Material (SM). Results We consider two variants of our model, denoted as GANE-OT and GANE-AP. GANE-OT employs the most basic OT-based attention model, specifically, global word-by-word alignment model; while GANE-AP additionally uses a one-layer convolutional neural network for the attention parsing. Detailed experimental setups are described in the SM. 
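Before turning to the results, the attention-parsing module described above can be sketched as follows. The filter bank, pooling axis, and projection shapes are placeholders; in particular, the exact dimension over which the max-pooling and softmax act is ambiguous in the text, and the sketch follows the reading that yields a single globally aligned context vector.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_parsing(A, C, filters, w_proj):
    """Parse a raw attention matrix A (n x m) and globally align context C (m x d).

    filters: list of 2-D kernels (the filter bank); w_proj: vector mapping pooled
    channel features to a scalar logit. All shapes are illustrative assumptions.
    """
    n, m = A.shape
    channels = []
    for f in filters:                                   # treat A as a 2-D "image"
        k1, k2 = f.shape
        P = np.pad(A, ((k1 // 2, k1 - 1 - k1 // 2), (k2 // 2, k2 - 1 - k2 // 2)))
        conv = np.array([[np.sum(P[i:i + k1, j:j + k2] * f)
                          for j in range(m)] for i in range(n)])
        channels.append(conv)
    H = np.stack(channels, axis=-1)                     # n x m x channels
    pooled = H.max(axis=0)                              # reduce over text positions -> m x channels
    logits = pooled @ w_proj                            # one logit per context position
    w = softmax(logits)                                 # parsed attention weights
    return w @ C                                        # globally aligned context vector (d,)

# Usage sketch with toy shapes (all values are placeholders).
rng = np.random.default_rng(0)
A = rng.random((6, 4))                                  # raw (e.g., OT-based) attention
C = rng.normal(size=(4, 8))                             # context embeddings
filters = [rng.normal(size=(3, 3)), rng.normal(size=(5, 5))]
aligned = attention_parsing(A, C, filters, rng.normal(size=2))
```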
The accompanying tables summarize the results of the link-prediction experiments on all three datasets, where different ratios of edges are used for training. Results for models other than GANE are collected from BIBREF9 , BIBREF10 and BIBREF34 . We have also repeated these experiments on our own, and the results are consistent with the ones reported. Note that DMTE did not report results on Hepth BIBREF34 . Both GANE variants consistently outperform competing solutions. In the low-training-sample regime our solutions lead by a large margin, and the performance gap closes as the number of training samples increases. This indicates that our OT-based mutual attention framework can yield more informative textual representations than other methods. Note that GANE-AP delivers better results than GANE-OT, suggesting that the attention-parsing mechanism can further improve the low-level mutual attention matrix. More results on Cora and Hepth are provided in the SM. To further evaluate the effectiveness of our model, we consider multi-label vertex classification. Following the setup described in BIBREF9 , we first compute all context-aware embeddings. Then we average each node's context-aware embeddings over all connected nodes to obtain a global embedding for each node, i.e., INLINEFORM0 , where INLINEFORM1 denotes the degree of node INLINEFORM2 . A linear SVM is employed, instead of a sophisticated deep classifier, to predict the label attribute of a node. We randomly sample a portion of labeled vertices with embeddings ( INLINEFORM3 ) to train the classifier, using the rest of the nodes to evaluate prediction accuracy. We compare our results with those from other state-of-the-art models in the corresponding table. The GANE models deliver better results than their counterparts, lending strong evidence that the OT attention and attention-parsing mechanisms capture more meaningful representations. We further explore the effect of the INLINEFORM0 -gram length in our model (i.e., the filter size of the convolutional layers used by the attention-parsing module). In Figure FIGREF39 we plot the AUC scores for link prediction on the Cora dataset against varying INLINEFORM1 -gram length. The performance peaks around length 20 and then starts to drop, indicating that a moderate attention span is preferable. Similar results are observed on the other datasets (results not shown). Experimental details on the ablation study can be found in the SM. Qualitative Analysis We employed t-SNE BIBREF47 to project the network embeddings for the Cora dataset into a two-dimensional space using GANE-OT, with each node color-coded according to its label. As shown in Figure FIGREF40 , papers clustered together belong to the same category, and the clusters are well separated from each other in the network embedding space. Note that our network embeddings are trained without any label information. Together with the label classification results, this implies our model is capable of extracting meaningful information from both text context and network topology. To verify that our OT-based attention mechanism indeed produces sparse attention scores, we visualized the OT attention matrices and compared them with similarity-based attention matrices (e.g., WANE). Figure FIGREF44 plots one typical example. Our OT solver returns a sparse attention matrix, while the dot-product-based WANE attention is effectively dense. This underscores the effectiveness of OT-based attention in terms of noise suppression.
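The vertex-classification protocol just described is straightforward to reproduce; a minimal sketch is shown below. The use of scikit-learn's LinearSVC and macro-averaged F1 matches the linear SVM and Macro-F1 scores mentioned above, but the authors' exact tooling and splits are not specified here, so the details should be read as assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def global_embeddings(context_embeddings, adjacency):
    """Average each node's context-aware embeddings over its connected nodes.

    context_embeddings[u][v] is the embedding of node u conditioned on neighbor v;
    adjacency maps each node to the list of its neighbors.
    """
    nodes = sorted(adjacency)
    return np.stack([
        np.mean([context_embeddings[u][v] for v in adjacency[u]], axis=0)
        for u in nodes
    ])

def evaluate(X, y, train_idx, test_idx):
    """Train a linear SVM on a sampled portion of labeled nodes, report Macro-F1."""
    clf = LinearSVC()
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    return f1_score(y[test_idx], pred, average="macro")
```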
Yes
70f9358dc01fd2db01a6b165e0b4e83e4a9141a7
70f9358dc01fd2db01a6b165e0b4e83e4a9141a7_0
Q: Which other embeddings do they compare against? Text: Introduction When performing network embedding, one maps network nodes into vector representations that reside in a low-dimensional latent space. Such techniques seek to encode topological information of the network into the embedding, such as affinity BIBREF0 , local interactions (e.g, local neighborhoods) BIBREF1 , and high-level properties such as community structure BIBREF2 . Relative to classical network-representation learning schemes BIBREF3 , network embeddings provide a more fine-grained representation that can be easily repurposed for other downstream applications (e.g., node classification, link prediction, content recommendation and anomaly detection). For real-world networks, one naturally may have access to rich side information about each node. Of particular interest are textual networks, where the side information comes in the form of natural language sequences BIBREF4 . For example, user profiles or their online posts on social networks (e.g., Facebook, Twitter), and documents in citation networks (e.g., Cora, arXiv). The integration of text information promises to significantly improve embeddings derived solely from the noisy, sparse edge representations BIBREF5 . Recent work has started to explore the joint embedding of network nodes and the associated text for abstracting more informative representations. BIBREF5 reformulated DeepWalk embedding as a matrix factorization problem, and fused text-embedding into the solution, while BIBREF6 augmented the network with documents as auxiliary nodes. Apart from direct embedding of the text content, one can first model the topics of the associated text BIBREF7 and then supply the predicted labels to facilitate embedding BIBREF8 . Many important downstream applications of network embeddings are context-dependent, since a static vector representation of the nodes adapts to the changing context less effectively BIBREF9 . For example, the interactions between social network users are context-dependent (e.g., family, work, interests), and contextualized user profiling can promote the specificity of recommendation systems. This motivates context-aware embedding techniques, such as CANE BIBREF9 , where the vector embedding dynamically depends on the context. For textual networks, the associated texts are natural candidates for context. CANE introduced a simple mutual attention weighting mechanism to derive context-aware dynamic embeddings for link prediction. Following the CANE setup, WANE BIBREF10 further improved the contextualized embedding, by considering fine-grained text alignment. Despite the promising results reported thus far, we identify three major limitations of existing context-aware network embedding solutions. First, mutual (or cross) attentions are computed from pairwise similarities between local text embeddings (word/phrase matching), whereas global sequence-level modeling is known to be more favorable across a wide range of NLP tasks BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Second, related to the above point, low-level affinity scores are directly used as mutual attention without considering any high-level parsing. Such an over-simplified operation denies desirable features, such as noise suppression and relational inference BIBREF15 , thereby compromising model performance. 
MMB, DeepWalk, LINE, Node2vec, TADW, CENE, CANE, WANE, DMTE
4a4616e1a9807f32cca9b92ab05e65b05c2a1bf5
4a4616e1a9807f32cca9b92ab05e65b05c2a1bf5_0
Q: What were the sizes of the test sets? Text: Introduction Preventable adverse drug reactions (ADRs) introduce a growing concern in the modern healthcare system as they represent a large fraction of hospital admissions and play a significant role in increased health care costs BIBREF0 . Based on a study examining hospital admission data, it is estimated that approximately three to four percent of hospital admissions are caused by adverse events BIBREF1 ; moreover, it is estimated that between 53% and 58% of these events were due to medical errors BIBREF2 (and are therefore considered preventable). Such preventable adverse events have been cited as the eighth leading cause of death in the U.S., with an estimated fatality rate of between 44,000 and 98,000 each year BIBREF3 . As drug-drug interactions (DDIs) may lead to preventable ADRs, being able to extract DDIs from structured product labeling (SPL) documents for prescription drugs is an important effort toward effective dissemination of drug safety information. The Text Analysis Conference (TAC) is a series of workshops aimed at encouraging research in natural language processing (NLP) and related applications by providing large test collections along with a standard evaluation procedure. The Drug-Drug Interaction Extraction from Drug Labels track of TAC 2018 BIBREF4 , organized by the U.S. Food and Drug Administration (FDA) and U.S. National Library of Medicine (NLM), is established with the goal of transforming the contents of SPLs into a machine-readable format with linkage to standard terminologies. We focus on the first two tasks of the DDI track involving named entity recognition (NER) and relation extraction (RE). Task 1 is focused on identifying mentions in the text corresponding to precipitants, interaction triggers, and interaction effects. Precipitants are defined as substances, drugs, or a drug class involved in an interaction. Task 2 is focused on identifying sentence-level interactions; concretely, the goal is to identify the interacting precipitant, the type of the interaction, and outcome of the interaction. The interaction outcome depends on the interaction type as follows. Pharmacodynamic (PD) interactions are associated with a specified effect corresponding to a span within the text that describes the outcome of the interaction. Naturally, it is possible for a precipitant to be involved in multiple PD interactions. Pharmacokinetic (PK) interactions are associated with a label from a fixed vocabulary of National Cancer Institute (NCI) Thesaurus codes indicating various levels of increase/decrease in functional measurements. For example, consider the sentence: “There is evidence that treatment with phenytoin leads to to decrease intestinal absorption of furosemide, and consequently to lower peak serum furosemide concentrations.” Here, phenytoin is involved in a PK interaction with the label drug, furosemide, and the type of PK interaction is indicated by the NCI Thesaurus code C54615 which describes a decrease in the maximum serum concentration (C INLINEFORM0 ) of the label drug. Lastly, unspecified (UN) interactions are interactions with an outcome that is not explicitly stated in the text and usually indicated through cautionary statements. 
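To make the annotation targets concrete, the sketch below represents the three interaction types and their outcomes as simple Python records. The field names and character offsets are illustrative rather than the official task schema; the NCI code C54615 is taken from the phenytoin/furosemide example above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Tuple

class InteractionType(Enum):
    PD = "pharmacodynamic"   # outcome: effect span(s) in the text
    PK = "pharmacokinetic"   # outcome: an NCI Thesaurus code (e.g., "C54615")
    UN = "unspecified"       # no explicit outcome stated

@dataclass
class Mention:
    span: Tuple[int, int]    # character offsets within the sentence (illustrative)
    text: str

@dataclass
class Interaction:
    precipitant: Mention
    itype: InteractionType
    effects: Optional[List[Mention]] = None   # populated only for PD interactions
    nci_code: Optional[str] = None            # populated only for PK interactions

# Example based on the text above: phenytoin lowers the label drug's Cmax (C54615).
phenytoin = Mention(span=(0, 9), text="phenytoin")
interaction = Interaction(precipitant=phenytoin,
                          itype=InteractionType.PK,
                          nci_code="C54615")
```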
Figure FIGREF1 features a simple example of a PD interaction that is extracted from the drug label for Adenocard, where the precipitant is digitalis and the effect is “ventricular fibrillation.” Materials and Methods Herein, we describe the training and testing data involved in this task and the metrics used for evaluation. In Section SECREF5 , we describe our modeling approach, our deep learning architecture, and our training procedure. Datasets Each drug label is a collection of sections (e.g., DOSAGE & ADMINISTRATION, CONTRAINDICATIONS, and WARNINGS) where each section contains one or more sentences. Each sentence is annotated with a list of zero or more mentions and interactions. The training data released for this task contains 22 drug labels, referred to as Training-22, with gold standard annotations. Two test sets of 57 and 66 drug labels, referred to as Test Set 1 and 2 respectively, with gold standard annotations are used to evaluate participating systems. As Training-22 is a relatively small dataset, we additionally utilize an external dataset with 180 annotated drug labels dubbed NLM-180 BIBREF5 (more later). We provide summary statistics about these datasets in Table TABREF3 . Test Set 1 closely resembles Training-22 with respect to the sections that are annotated. However, Test Set 1 is more sparse in the sense that there are more sentences per drug label (144 vs. 27), with a smaller proportion of those sentences having gold annotations (23% vs. 51%). Test Set 2 is unique in that it contains annotations from only two sections, namely DRUG INTERACTIONS and CLINICAL PHARMACOLOGY, the latter of which is not represented in Training-22 (nor Test Set 1). Lastly, Training-22, Test Set 1, and Test Set 2 all vary with respect to the distribution of interaction types, with Training-22, Test Set 1, and Test Set 2 containing a higher proportion of PD, UN, and PK interactions respectively. Evaluation Metrics We used the official evaluation metrics for NER and relation extraction based on the standard precision, recall, and F1 micro-averaged over exactly matched entity/relation annotations. For either task, there are two matching criteria: primary and relaxed. For entity recognition, relaxed matching considers only entity bounds while primary matching considers entity bounds as well as the type of the entity. For relation extraction, relaxed matching only considers precipitant drug (and their bounds) while primary matching comprehensively considers precipitant drugs and, for each, the corresponding interaction type and interaction outcome. As relation extraction evaluation takes into account the bounds of constituent entity predictions, relation extraction performance is heavily reliant on entity recognition performance. On the other hand, we note that while NER evaluation considers trigger mentions, triggers are ignored when evaluating relation extraction performance. Methodology We propose a multi-task learning framework for extracting drug-drug interactions from drug labels. The framework involves branching paths for each training objective (corresponding to sub-tasks) such that parameters of earlier layers (i.e., the context encoder) are shared. Since only drugs involved in an interaction (precipitants) are annotated in the ground truth, we model the task of precipitant recognition and interaction type prediction jointly. We accomplish this by reducing the problem to a sequence tagging problem via a novel NER tagging scheme. 
That is, for each precipitant drug, we additionally encode the associated interaction type. Hence, there are five possible tags: T for trigger, E for effects, and D, K, and U for precipitants with pharmacodynamic, pharmacokinetic, and unspecified interactions respectively. As a preprocesssing step, we identify the label drug in the sentence, if it is mentioned, and bind it to a generic entity token (e.g. “LABELDRUG”). We additionally account for label drug aliases, such as the generic version of a brand-name drug, and bind them to the same entity token. Table TABREF7 shows how the tagging scheme is applied to the simple example in Figure FIGREF1 . A drawback is that simplifying assumptions must be made that will hamper recall; e.g., we only consider non-overlapping mentions (more later). Once we have identified the precipitant offsets (as well as of triggers/effects) and the interaction type for each precipitant, we subsequently predict the outcome or consequence of the interaction (if any). To that end, we consider all entity spans annotated with K tags and assign them a label from a static vocabulary of 20 NCI concept codes corresponding to PK consequence (i.e., multiclass classification) based on sentence-context. Likewise, we consider all entity spans annotated with D tags and link them to mention spans annotated with E tags; we accomplish this via binary classification of all pairwise combinations. For entity spans annotated with U tags, no outcome prediction is made. Our proposed deep neural network is illustrated in Figure FIGREF8 . We utilize Bi-directional Long Short-Term Memory networks (Bi-LSTMs) and convolutional neural networks (CNNs) designed for natural language processing as building blocks for our architecture BIBREF6 , BIBREF7 . Entity recognition and outcome prediction share common parameters via a Bi-LSTM context encoder that composes a context representation at each timestep based on input words mapped to dense embeddings and character-CNN composed representations. We use the same character-CNN representation as described in a prior work BIBREF8 ; however, in this work, we omit the character type embedding. A Bi-LSTM component is used to annotate IOB tags for joint entity recognition and interaction type prediction (or, NER prediction) while a CNN with two separate dense output layers (one for PK and one for PD interactions) is used for outcome prediction. We consider NER prediction to be the main objective with outcome prediction playing a secondary role. When predicting outcome, the contextual input is arranged such that candidate entity (and effect) mentions are bound to generic tokens; the resulting representation is referred to as “entity-bound word embeddings” in Figure FIGREF8 . We denote INLINEFORM0 as an abstract function, representing a standard bi-directional recurrent neural network with LSTM units, where INLINEFORM1 is the number of input vector representations (e.g., word embeddings) in the sequence and INLINEFORM2 and INLINEFORM3 are the dimensionality of the input and output representations respectively. We similarity denote INLINEFORM4 to represent a standard CNN that maps an INLINEFORM5 matrix to a vector representation of length INLINEFORM6 , where INLINEFORM7 is a list of window (or kernel) sizes that are used in the convolution. Let the input be a sentence of length INLINEFORM0 represented as a matrix INLINEFORM1 , where each row corresponds to a word embedding of length INLINEFORM2 . 
Moreover, let INLINEFORM3 represent the word at position INLINEFORM4 of the sentence such that each of the INLINEFORM5 rows correspond to a character embedding of length INLINEFORM6 . The purpose of the context encoder is to encode each word of the input with surrounding linguistic features and long-distance dependency information. To that end, we employ the use of a Bi-LSTM network to encode S as a context matrix INLINEFORM7 where INLINEFORM8 is a hyper-parameter of the network. Concretely, DISPLAYFORM0 where INLINEFORM0 denotes the INLINEFORM1 row of INLINEFORM2 and INLINEFORM3 is the vector concatenation operator. Essentially, for each word, we compose character representations using a CNN with a window size of three and concatenate them to pre-trained word embeddings; we stack the concatenated vectors as rows of a new matrix that is ultimately fed as input to the Bi-LSTM context encoder. The INLINEFORM4 row of INLINEFORM5 , denoted as INLINEFORM6 , represents the entire context centered at the INLINEFORM7 word. As an implementation detail, we chose INLINEFORM8 and INLINEFORM9 to be the maximum sentence and word length (according to the training data) respectively and pad shorter examples with zero vectors. The network for the NER objective manifests as a stacked Bi-LSTM architecture when we consider both the context encoder and the entity recognition component. Borrowing from residual networks BIBREF9 , we re-inforce the input by concatenating word embeddings to the intermediate context vectors before feeding it to the second Bi-LSTM layer. Concretely, the final entity recognition matrix INLINEFORM0 is composed such that DISPLAYFORM0 The output at each position INLINEFORM0 is INLINEFORM1 where INLINEFORM0 is the INLINEFORM1 row of INLINEFORM2 and INLINEFORM3 and INLINEFORM4 are network parameters such that INLINEFORM5 denotes the number of possible IOB tags such as O, B-K, I-K and so on. In order to obtain a categorical distribution, we apply the SoftMax function to INLINEFORM6 such that INLINEFORM7 where INLINEFORM0 is the vector of probability estimates serving as a categorical distribution over INLINEFORM1 tags for the word at position INLINEFORM2 . We optimize by computing the standard categorical cross-entropy loss for each of the INLINEFORM3 individual tag predictions. The final loss to be optimized is the mean over all INLINEFORM4 individually-computed losses. A stacked Bi-LSTM architecture improves over a single Bi-LSTM architecture given its capacity to learn deep contextualized embeddings. While we showed that the stacked approach is better for this particular task in Section SECREF19 , it is not necessarily the case that a stacked approach is better in general. We offer an alternative explanation and motivation for using a stacked architecture for this particular problem based on our initial intuition as follows. First, we note that a standalone Bi-LSTM is not able to handle the inference aspect of NER, which entails learning IOB constraints. As an example, in the IOB encoding scheme, it is not possible for a I-D tag to immediately follow a B-E tag; in this way, the prediction of a tag is directly dependent on the prediction of neighboring tags. This inference aspect is typically handled by a linear-chain CRF. 
We believe that a stacked Bi-LSTM at least partially handles this aspect in the sense that the first Bi-LSTM (the context encoder) is given the opportunity to form independent preliminary decisions while the second Bi-LSTM is tasked with to making final decisions (based on preliminary ones) that are more globally consistent with respect to IOB constraints. To predict outcome, we construct a secondary branch in the network path that involves convolving over the word and context embeddings made available in earlier layers. We first define a relation representation INLINEFORM0 that is produced by convolving with window sizes 3, 4, and 5 over the context vectors concatenated to entity-bound versions of the original input; concretely, INLINEFORM1 where INLINEFORM0 is the entity-bound version of INLINEFORM1 . Based on this outcome representation, we compose two separate softmax outputs: one for PK interactions and one for PD interactions. Concretely, the output layers are INLINEFORM2 and INLINEFORM0 where INLINEFORM0 and INLINEFORM1 are probability estimates serving as a categorical distribution over the outcome label space for PD and PK respectively and INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 are parameters of the network. For PK, INLINEFORM6 given there are 20 possible NCI Thesaurus codes corresponding to PK outcomes. For PD, INLINEFORM7 as it is a binary classification problem to assess whether the precipitant and effect pair encoded by INLINEFORM8 are linked. We optimize using the standard categorical cross-entropy loss on both objectives. In NLM-180, there is no distinction between triggers and effects; moreover, PK effects are limited to coarse-grained (binary) labels corresponding to increase or decrease in function measurements. Hence, a direct mapping from NLM-180 to Training-22 is impossible. As a compromise, NLM-180 “triggers” were mapped to Training-22 triggers in the case of unspecified and PK interactions. For PD interactions, we instead mapped NLM-180 “triggers” to Training-22 effects, which we believe to be appropriate based on our manual analysis of the data. Since we do not have both trigger and effect for every PD interaction, we opted to ignore trigger mentions altogether in the case of PD interactions to avoid introducing mixed signals. While trigger recognition has no bearing on relation extraction performance, this policy has the effect of reducing the recall upperbound on NER by about 25% (more later on upperbound). To overcome the lack of fine-grained annotations for PK outcome in NLM-180, we deploy the well-known bootstrapping approach BIBREF10 to incrementally annotate NLM-180 PK outcomes using Training-22 annotations as seed examples. To mitigate the problem of semantic drift, in each bootstrap cycle, we re-annotated by hand predictions that were not consistent with the original NLM-180 coarse annotations (i.e., active learning BIBREF11 ). We train the three objective losses (NER, PK outcome, and PD outcome) in an interleaved fashion at the minibatch BIBREF12 level. We use word embeddings of size 200 pre-trained on the PubMed corpus BIBREF13 as input to the network; these are further modified during back-propagation. For the character-level CNN, we set the character embedding size to 24 with 50 filters over a window size of 3; the final character-CNN composition is therefore of length 50. For each Bi-LSTM, the hidden size is set to 100 such that context vectors are 200 in length. 
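Putting these pieces together, a minimal PyTorch sketch of the stacked Bi-LSTM NER branch is given below. This is a reconstruction from the description above, not the authors' implementation; the char-CNN word features are assumed to be computed separately, and the 11-tag output corresponds to B-/I- variants of the five mention types plus O.

```python
import torch
import torch.nn as nn

class StackedBiLSTMTagger(nn.Module):
    """Sketch of the stacked Bi-LSTM NER branch: a Bi-LSTM context encoder over
    [word embedding ; char-CNN feature], a second Bi-LSTM over the context
    vectors re-inforced with the word embeddings, and a per-token tag output.
    Dimensions follow the stated hyper-parameters (200-d words, 50-d char-CNN,
    hidden size 100); everything else is an illustrative reconstruction."""
    def __init__(self, word_dim=200, char_dim=50, hidden=100, n_tags=11):
        super().__init__()
        self.encoder = nn.LSTM(word_dim + char_dim, hidden,
                               bidirectional=True, batch_first=True)
        self.tagger = nn.LSTM(2 * hidden + word_dim, hidden,
                              bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, word_emb, char_feat):
        # word_emb: (batch, seq, word_dim); char_feat: (batch, seq, char_dim)
        context, _ = self.encoder(torch.cat([word_emb, char_feat], dim=-1))
        states, _ = self.tagger(torch.cat([context, word_emb], dim=-1))
        return self.out(states)              # per-token logits over the IOB tags
```

A linear-chain CRF output layer could replace the per-token softmax if explicit IOB constraints were to be enforced directly.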
For outcome prediction, we used window sizes of 3, 4, and 5 with 50 filters per window size; the final vector representation for outcome prediction is therefore 150 in length. A held-out development set of 4 drug labels is used for tuning and validation. The models are trained for 30 epochs with check-pointing; only the check-point with the best performance on the development set is kept for testing. We dynamically set the mini-batch size INLINEFORM0 as a function of the number of examples INLINEFORM1 such that the number of training iterations is roughly 300 per epoch (and also constant regardless of training data size); concretely, INLINEFORM2 . As a form of regularization, we apply dropout BIBREF14 at a rate of 50% on the hidden representations immediately after a Bi-LSTM or CNN composition. The outcome objectives are trained such that the gradients of the context encoder weights are downscaled by an order of magnitude (i.e., one tenth) to encourage learning at the later layers. When learning on the NER objective – the main branch of the network – the gradients are not downscaled in the same manner. Moreover, when training on the NER objective, we upweight the loss penalty on “relation” tags (non-O tags) by a factor of 10, which forces the model to prioritize differentiation between different types of interactions over span segmentation. We additionally upweight the loss penalty by a factor of 3 on Training-22 examples compared to NLM-180 examples. We optimize using the Adam BIBREF15 optimization method. These hyper-parameters were tuned during initial experiments. Results and Discussion In this section, we present and discuss the results of our cross-validation experiments. We then describe the “runs” that were submitted as challenge entries and present our official challenge results. We discuss these results in Section SECREF28 . Validation Results We present the results of our initial experiments in Table TABREF20 . Evaluations were produced as as result of 11-fold cross-validation over Training-22 with two drug labels per fold. Instead of macro-averaging over folds, and thereby weighting each fold equally, we evaluate on the union of all 11 test-fold predictions. The upperbound in Table TABREF20 is produced by reducing Training-22 (with gold labels) to our sequence-tagging format and then reverting it back to the original official XML format. Lowered recall is mostly due to simplifying assumptions; e.g., we only consider non-overlapping mentions. For coordinated disjoint cases such as “X and Y inducers”, we only considered “Y inducers” in our simplifying assumption. Imperfect precision is due to discrepancies between the tokenization scheme used by our method and that used to produce gold annotations; this leads to the occasional mismatch in entity offsets during evaluation. Using a stacked Bi-LSTM trained on the original 22 training examples (Table TABREF20 ; row 1) as our baseline, we make the following observations. Incorporating NLM-180 resulted in a significant boost of more than 20 F1-points in relation extraction performance and more than 10 F1-points in NER performance (Table TABREF20 ; row 2), despite the lowered upperbound on NER recall as mentioned in Section SECREF5 . Adding character-CNN based word representations improved performance marginally, more so for NER than relation extraction (Table TABREF20 ; row 3). 
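For reference, the outcome-prediction branch specified above (window sizes 3, 4, and 5 with 50 filters each, yielding a 150-dimensional relation representation, followed by separate PK and PD output layers) might be sketched as follows. The input dimensionality and the max-over-time pooling are assumptions rather than stated details, and this is not the authors' code.

```python
import torch
import torch.nn as nn

class OutcomeBranch(nn.Module):
    """Illustrative sketch of the outcome-prediction branch: convolutions over
    [context vector ; entity-bound embedding] with windows 3/4/5 (50 filters
    each), then separate PK (20 NCI codes) and PD (linked / not linked) heads.
    in_dim=400 and the max-over-time pooling are assumptions."""
    def __init__(self, in_dim=400, n_filters=50, n_pk_codes=20):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(in_dim, n_filters, kernel_size=k) for k in (3, 4, 5)])
        self.pk_out = nn.Linear(3 * n_filters, n_pk_codes)
        self.pd_out = nn.Linear(3 * n_filters, 2)

    def forward(self, x):                      # x: (batch, seq_len, in_dim)
        x = x.transpose(1, 2)                  # Conv1d expects (batch, channels, seq)
        pooled = [conv(x).max(dim=2).values for conv in self.convs]
        r = torch.cat(pooled, dim=1)           # 150-d relation representation
        return self.pk_out(r), self.pd_out(r)  # logits for PK codes and PD linking
```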
We also implemented several tweaks to the pre-processing and post-processing aspects of the model based on preliminary error analysis including (1) using drug class mentions (e.g., “diuretics”) as proxies if the drug label is not mentioned directly; (2) removing modifiers such as moderate, strong, and potent so that output conforms to official annotation guidelines; and (3) purging predicted mentions with only stopwords or generic terms such as “drugs” or “agents.” These tweaks improved performance by more than two F1-points across both metrics (Table TABREF20 ; row 4). Based on early experiments with simpler models tuned on relaxed matching (not shown in Table TABREF20 and not directly comparable to results displayed in Table TABREF20 ), we found that a stacked Bi-LSTM architecture improves over a single Bi-LSTM by approximately four F1-points on relation extraction (55.59% vs. 51.55% F1 tuned on the relaxed matching criteria). We moreover found that omitting word embeddings as input at the second Bi-LSTM results in worse performance at 52.91% F1. We also experimented with using Temporal Convolution Networks (TCNs) BIBREF16 as a “drop-in” replacement for Bi-LSTMs. Our attempts involved replacing only the second Bi-LSTM with a TCN (Table TABREF20 ; row 4) as well as replacing both Bi-LSTMs with TCNs (Table TABREF20 ; row 5). The results of these early experiments were not promising and further fine-tuning may be necessary for better performance. Official Test Results Our final system submission is based on a stacked Bi-LSTM network with character-CNNs trained on both Training-22 and NLM-180 (corresponding to row 4 of Table TABREF20 ). We submitted the following three runs based on this architecture: A single model. An ensemble over ten models each trained with randomly initialized weights and a random development split. Intuitively, models collectively “vote” on predicted annotations that are kept and annotations that are discarded. A unique annotation (entity or relation) has one vote for each time it appears in one of the ten model prediction sets. In terms of implementation, unique annotations are incrementally added (to the final prediction set) in order of descending vote count; subsequent annotations that conflict (i.e., overlap based on character offsets) with existing annotations are discarded. Hence, we loosely refer to this approach as “voting-based” ensembling. A single model with pre/post-processing rules to handle modifier coordinations; for example, “X and Y inducers” would be correctly identified as two distinct entities corresponding to “X inducers” and “Y inducers.” Here, we essentially encoded “X and Y inducers” as a single entity when training the NER objective; during test time, we use simple rules based on pattern matching to split the joint “entity” into its constituents. Eight teams participated in task 1 while four teams participated in task 2. We record the relative performance of our system (among others in the top 5) on the two official test sets in Table TABREF24 . For each team, we only display the performance of the best run for a particular test set. Methods are grouped by the data used for training and ranked in ascending order of primary relation extraction performance followed by entity recognition performance. We also included a single model trained solely on Training-22, that was not submitted, for comparison. Our voting-based ensemble performed best among the three systems submitted by our team on both NER and relation extraction. 
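The voting-based ensembling described above can be sketched as follows. Annotations are represented here as hashable tuples whose first two fields are character offsets; this is an illustrative simplification of the actual annotation structure.

```python
from collections import Counter

def vote_ensemble(prediction_sets):
    """Keep unique annotations in descending vote order, discarding any that
    overlap (by character offsets) with an already-kept annotation."""
    votes = Counter(ann for preds in prediction_sets for ann in set(preds))
    kept = []
    for ann, _ in votes.most_common():                 # descending vote count
        start, end = ann[0], ann[1]
        if not any(start < k[1] and k[0] < end for k in kept):
            kept.append(ann)
    return kept

# Toy example with three "models"; annotations are (start, end, type) tuples.
models = [[(10, 19, "D"), (30, 55, "E")], [(10, 19, "D")], [(12, 19, "D")]]
print(vote_ensemble(models))   # -> [(10, 19, 'D'), (30, 55, 'E')]; (12, 19, 'D') conflicts
```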
In the official challenge, this model placed second overall on both NER and relation extraction. Tang et al. BIBREF20 boasts the top performing system on both tasks. In addition to Training-22 and NLM-180, the team trained and validated their models on a set of 1148 sentences sampled from DailyMed labels that were manually annotated according to official annotation guidelines. Hence, strictly speaking, their method is not directly comparable to ours given the significant difference in available training data. Discussion While precision was similar between the three systems (with exceptions), we observed that our ensemble-based system benefited mostly from improved recall. This aligns with our initial expectation (based on prior experience with deep learning models) that an ensemble-based approach would improve stability and accuracy with deep neural models. Although including NLM-180 as training data resulted in significant performance gains during 11-fold cross validation, we find that the same improvements were not as dramatic on either test sets despite the 800% gain in training data. As such, we offer the following analysis. First, we suspect that there may be a semantic or annotation drift between these datasets as annotation guidelines evolve over time and as annotators become more experienced. To our knowledge, the datasets were annotated in the following order: NLM-180, Training-22, and finally Test Sets 1 and 2; moreover, Test Sets 1 and 2 were annotated by separate groups of annotators. Second, having few but higher quality examples may be more advantageous than having many but lower quality examples, at least for this particular task where evaluation is based on matching exact character offsets. Finally, we note that the top performing system exhibits superior performance on Test Set 1 compared to Test Set 2; interestingly, we observe an inverse of the scenario in our own system. This may be an indicator that our system struggles with data that is more “sparse” (as previously defined in Section SECREF2 ). Conclusion We presented a method for jointly extracting precipitants and their interaction types as part of a multi-task framework that additionally detects interaction outcome. Among three “runs”, a ten model voting-ensemble was our best performer. In future efforts, we will experiment with Graph Convolution Networks BIBREF21 over dependency trees as a “drop-in” replace for Bi-LSTMs to assess its suitability for this task. Acknowledgements This research was conducted during TT's participation in the Lister Hill National Center for Biomedical Communications (LHNCBC) Research Program in Medical Informatics for Graduate students at the U.S. National Library of Medicine, National Institutes of Health. HK is supported by the intramural research program at the U.S. National Library of Medicine, National Institutes of Health. RK and TT are also supported by the U.S. National Library of Medicine through grant R21LM012274.
Test set 1 contained 57 drug labels and 8208 sentences and test set 2 contained 66 drug labels and 4224 sentences
3752bbc5367973ab5b839ded08c57f51336b5c3d
3752bbc5367973ab5b839ded08c57f51336b5c3d_0
Q: What training data did they use? Text: Introduction Preventable adverse drug reactions (ADRs) introduce a growing concern in the modern healthcare system as they represent a large fraction of hospital admissions and play a significant role in increased health care costs BIBREF0 . Based on a study examining hospital admission data, it is estimated that approximately three to four percent of hospital admissions are caused by adverse events BIBREF1 ; moreover, it is estimated that between 53% and 58% of these events were due to medical errors BIBREF2 (and are therefore considered preventable). Such preventable adverse events have been cited as the eighth leading cause of death in the U.S., with an estimated fatality rate of between 44,000 and 98,000 each year BIBREF3 . As drug-drug interactions (DDIs) may lead to preventable ADRs, being able to extract DDIs from structured product labeling (SPL) documents for prescription drugs is an important effort toward effective dissemination of drug safety information. The Text Analysis Conference (TAC) is a series of workshops aimed at encouraging research in natural language processing (NLP) and related applications by providing large test collections along with a standard evaluation procedure. The Drug-Drug Interaction Extraction from Drug Labels track of TAC 2018 BIBREF4 , organized by the U.S. Food and Drug Administration (FDA) and U.S. National Library of Medicine (NLM), is established with the goal of transforming the contents of SPLs into a machine-readable format with linkage to standard terminologies. We focus on the first two tasks of the DDI track involving named entity recognition (NER) and relation extraction (RE). Task 1 is focused on identifying mentions in the text corresponding to precipitants, interaction triggers, and interaction effects. Precipitants are defined as substances, drugs, or a drug class involved in an interaction. Task 2 is focused on identifying sentence-level interactions; concretely, the goal is to identify the interacting precipitant, the type of the interaction, and outcome of the interaction. The interaction outcome depends on the interaction type as follows. Pharmacodynamic (PD) interactions are associated with a specified effect corresponding to a span within the text that describes the outcome of the interaction. Naturally, it is possible for a precipitant to be involved in multiple PD interactions. Pharmacokinetic (PK) interactions are associated with a label from a fixed vocabulary of National Cancer Institute (NCI) Thesaurus codes indicating various levels of increase/decrease in functional measurements. For example, consider the sentence: “There is evidence that treatment with phenytoin leads to to decrease intestinal absorption of furosemide, and consequently to lower peak serum furosemide concentrations.” Here, phenytoin is involved in a PK interaction with the label drug, furosemide, and the type of PK interaction is indicated by the NCI Thesaurus code C54615 which describes a decrease in the maximum serum concentration (C INLINEFORM0 ) of the label drug. Lastly, unspecified (UN) interactions are interactions with an outcome that is not explicitly stated in the text and usually indicated through cautionary statements. 
Figure FIGREF1 features a simple example of a PD interaction that is extracted from the drug label for Adenocard, where the precipitant is digitalis and the effect is “ventricular fibrillation.” Materials and Methods Herein, we describe the training and testing data involved in this task and the metrics used for evaluation. In Section SECREF5 , we describe our modeling approach, our deep learning architecture, and our training procedure. Datasets Each drug label is a collection of sections (e.g., DOSAGE & ADMINISTRATION, CONTRAINDICATIONS, and WARNINGS) where each section contains one or more sentences. Each sentence is annotated with a list of zero or more mentions and interactions. The training data released for this task contains 22 drug labels, referred to as Training-22, with gold standard annotations. Two test sets of 57 and 66 drug labels, referred to as Test Set 1 and 2 respectively, with gold standard annotations are used to evaluate participating systems. As Training-22 is a relatively small dataset, we additionally utilize an external dataset with 180 annotated drug labels dubbed NLM-180 BIBREF5 (more later). We provide summary statistics about these datasets in Table TABREF3 . Test Set 1 closely resembles Training-22 with respect to the sections that are annotated. However, Test Set 1 is more sparse in the sense that there are more sentences per drug label (144 vs. 27), with a smaller proportion of those sentences having gold annotations (23% vs. 51%). Test Set 2 is unique in that it contains annotations from only two sections, namely DRUG INTERACTIONS and CLINICAL PHARMACOLOGY, the latter of which is not represented in Training-22 (nor Test Set 1). Lastly, Training-22, Test Set 1, and Test Set 2 all vary with respect to the distribution of interaction types, with Training-22, Test Set 1, and Test Set 2 containing a higher proportion of PD, UN, and PK interactions respectively. Evaluation Metrics We used the official evaluation metrics for NER and relation extraction based on the standard precision, recall, and F1 micro-averaged over exactly matched entity/relation annotations. For either task, there are two matching criteria: primary and relaxed. For entity recognition, relaxed matching considers only entity bounds while primary matching considers entity bounds as well as the type of the entity. For relation extraction, relaxed matching only considers precipitant drug (and their bounds) while primary matching comprehensively considers precipitant drugs and, for each, the corresponding interaction type and interaction outcome. As relation extraction evaluation takes into account the bounds of constituent entity predictions, relation extraction performance is heavily reliant on entity recognition performance. On the other hand, we note that while NER evaluation considers trigger mentions, triggers are ignored when evaluating relation extraction performance. Methodology We propose a multi-task learning framework for extracting drug-drug interactions from drug labels. The framework involves branching paths for each training objective (corresponding to sub-tasks) such that parameters of earlier layers (i.e., the context encoder) are shared. Since only drugs involved in an interaction (precipitants) are annotated in the ground truth, we model the task of precipitant recognition and interaction type prediction jointly. We accomplish this by reducing the problem to a sequence tagging problem via a novel NER tagging scheme. 
That is, for each precipitant drug, we additionally encode the associated interaction type. Hence, there are five possible tags: T for trigger, E for effects, and D, K, and U for precipitants with pharmacodynamic, pharmacokinetic, and unspecified interactions respectively. As a preprocesssing step, we identify the label drug in the sentence, if it is mentioned, and bind it to a generic entity token (e.g. “LABELDRUG”). We additionally account for label drug aliases, such as the generic version of a brand-name drug, and bind them to the same entity token. Table TABREF7 shows how the tagging scheme is applied to the simple example in Figure FIGREF1 . A drawback is that simplifying assumptions must be made that will hamper recall; e.g., we only consider non-overlapping mentions (more later). Once we have identified the precipitant offsets (as well as of triggers/effects) and the interaction type for each precipitant, we subsequently predict the outcome or consequence of the interaction (if any). To that end, we consider all entity spans annotated with K tags and assign them a label from a static vocabulary of 20 NCI concept codes corresponding to PK consequence (i.e., multiclass classification) based on sentence-context. Likewise, we consider all entity spans annotated with D tags and link them to mention spans annotated with E tags; we accomplish this via binary classification of all pairwise combinations. For entity spans annotated with U tags, no outcome prediction is made. Our proposed deep neural network is illustrated in Figure FIGREF8 . We utilize Bi-directional Long Short-Term Memory networks (Bi-LSTMs) and convolutional neural networks (CNNs) designed for natural language processing as building blocks for our architecture BIBREF6 , BIBREF7 . Entity recognition and outcome prediction share common parameters via a Bi-LSTM context encoder that composes a context representation at each timestep based on input words mapped to dense embeddings and character-CNN composed representations. We use the same character-CNN representation as described in a prior work BIBREF8 ; however, in this work, we omit the character type embedding. A Bi-LSTM component is used to annotate IOB tags for joint entity recognition and interaction type prediction (or, NER prediction) while a CNN with two separate dense output layers (one for PK and one for PD interactions) is used for outcome prediction. We consider NER prediction to be the main objective with outcome prediction playing a secondary role. When predicting outcome, the contextual input is arranged such that candidate entity (and effect) mentions are bound to generic tokens; the resulting representation is referred to as “entity-bound word embeddings” in Figure FIGREF8 . We denote INLINEFORM0 as an abstract function, representing a standard bi-directional recurrent neural network with LSTM units, where INLINEFORM1 is the number of input vector representations (e.g., word embeddings) in the sequence and INLINEFORM2 and INLINEFORM3 are the dimensionality of the input and output representations respectively. We similarity denote INLINEFORM4 to represent a standard CNN that maps an INLINEFORM5 matrix to a vector representation of length INLINEFORM6 , where INLINEFORM7 is a list of window (or kernel) sizes that are used in the convolution. Let the input be a sentence of length INLINEFORM0 represented as a matrix INLINEFORM1 , where each row corresponds to a word embedding of length INLINEFORM2 . 
Moreover, let INLINEFORM3 represent the word at position INLINEFORM4 of the sentence such that each of the INLINEFORM5 rows correspond to a character embedding of length INLINEFORM6 . The purpose of the context encoder is to encode each word of the input with surrounding linguistic features and long-distance dependency information. To that end, we employ the use of a Bi-LSTM network to encode S as a context matrix INLINEFORM7 where INLINEFORM8 is a hyper-parameter of the network. Concretely, DISPLAYFORM0 where INLINEFORM0 denotes the INLINEFORM1 row of INLINEFORM2 and INLINEFORM3 is the vector concatenation operator. Essentially, for each word, we compose character representations using a CNN with a window size of three and concatenate them to pre-trained word embeddings; we stack the concatenated vectors as rows of a new matrix that is ultimately fed as input to the Bi-LSTM context encoder. The INLINEFORM4 row of INLINEFORM5 , denoted as INLINEFORM6 , represents the entire context centered at the INLINEFORM7 word. As an implementation detail, we chose INLINEFORM8 and INLINEFORM9 to be the maximum sentence and word length (according to the training data) respectively and pad shorter examples with zero vectors. The network for the NER objective manifests as a stacked Bi-LSTM architecture when we consider both the context encoder and the entity recognition component. Borrowing from residual networks BIBREF9 , we re-inforce the input by concatenating word embeddings to the intermediate context vectors before feeding it to the second Bi-LSTM layer. Concretely, the final entity recognition matrix INLINEFORM0 is composed such that DISPLAYFORM0 The output at each position INLINEFORM0 is INLINEFORM1 where INLINEFORM0 is the INLINEFORM1 row of INLINEFORM2 and INLINEFORM3 and INLINEFORM4 are network parameters such that INLINEFORM5 denotes the number of possible IOB tags such as O, B-K, I-K and so on. In order to obtain a categorical distribution, we apply the SoftMax function to INLINEFORM6 such that INLINEFORM7 where INLINEFORM0 is the vector of probability estimates serving as a categorical distribution over INLINEFORM1 tags for the word at position INLINEFORM2 . We optimize by computing the standard categorical cross-entropy loss for each of the INLINEFORM3 individual tag predictions. The final loss to be optimized is the mean over all INLINEFORM4 individually-computed losses. A stacked Bi-LSTM architecture improves over a single Bi-LSTM architecture given its capacity to learn deep contextualized embeddings. While we showed that the stacked approach is better for this particular task in Section SECREF19 , it is not necessarily the case that a stacked approach is better in general. We offer an alternative explanation and motivation for using a stacked architecture for this particular problem based on our initial intuition as follows. First, we note that a standalone Bi-LSTM is not able to handle the inference aspect of NER, which entails learning IOB constraints. As an example, in the IOB encoding scheme, it is not possible for a I-D tag to immediately follow a B-E tag; in this way, the prediction of a tag is directly dependent on the prediction of neighboring tags. This inference aspect is typically handled by a linear-chain CRF. 
We believe that a stacked Bi-LSTM at least partially handles this aspect in the sense that the first Bi-LSTM (the context encoder) is given the opportunity to form independent preliminary decisions while the second Bi-LSTM is tasked with to making final decisions (based on preliminary ones) that are more globally consistent with respect to IOB constraints. To predict outcome, we construct a secondary branch in the network path that involves convolving over the word and context embeddings made available in earlier layers. We first define a relation representation INLINEFORM0 that is produced by convolving with window sizes 3, 4, and 5 over the context vectors concatenated to entity-bound versions of the original input; concretely, INLINEFORM1 where INLINEFORM0 is the entity-bound version of INLINEFORM1 . Based on this outcome representation, we compose two separate softmax outputs: one for PK interactions and one for PD interactions. Concretely, the output layers are INLINEFORM2 and INLINEFORM0 where INLINEFORM0 and INLINEFORM1 are probability estimates serving as a categorical distribution over the outcome label space for PD and PK respectively and INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 are parameters of the network. For PK, INLINEFORM6 given there are 20 possible NCI Thesaurus codes corresponding to PK outcomes. For PD, INLINEFORM7 as it is a binary classification problem to assess whether the precipitant and effect pair encoded by INLINEFORM8 are linked. We optimize using the standard categorical cross-entropy loss on both objectives. In NLM-180, there is no distinction between triggers and effects; moreover, PK effects are limited to coarse-grained (binary) labels corresponding to increase or decrease in function measurements. Hence, a direct mapping from NLM-180 to Training-22 is impossible. As a compromise, NLM-180 “triggers” were mapped to Training-22 triggers in the case of unspecified and PK interactions. For PD interactions, we instead mapped NLM-180 “triggers” to Training-22 effects, which we believe to be appropriate based on our manual analysis of the data. Since we do not have both trigger and effect for every PD interaction, we opted to ignore trigger mentions altogether in the case of PD interactions to avoid introducing mixed signals. While trigger recognition has no bearing on relation extraction performance, this policy has the effect of reducing the recall upperbound on NER by about 25% (more later on upperbound). To overcome the lack of fine-grained annotations for PK outcome in NLM-180, we deploy the well-known bootstrapping approach BIBREF10 to incrementally annotate NLM-180 PK outcomes using Training-22 annotations as seed examples. To mitigate the problem of semantic drift, in each bootstrap cycle, we re-annotated by hand predictions that were not consistent with the original NLM-180 coarse annotations (i.e., active learning BIBREF11 ). We train the three objective losses (NER, PK outcome, and PD outcome) in an interleaved fashion at the minibatch BIBREF12 level. We use word embeddings of size 200 pre-trained on the PubMed corpus BIBREF13 as input to the network; these are further modified during back-propagation. For the character-level CNN, we set the character embedding size to 24 with 50 filters over a window size of 3; the final character-CNN composition is therefore of length 50. For each Bi-LSTM, the hidden size is set to 100 such that context vectors are 200 in length. 
For outcome prediction, we used window sizes of 3, 4, and 5 with 50 filters per window size; the final vector representation for outcome prediction is therefore 150 in length. A held-out development set of 4 drug labels is used for tuning and validation. The models are trained for 30 epochs with check-pointing; only the check-point with the best performance on the development set is kept for testing. We dynamically set the mini-batch size INLINEFORM0 as a function of the number of examples INLINEFORM1 such that the number of training iterations is roughly 300 per epoch (and also constant regardless of training data size); concretely, INLINEFORM2 . As a form of regularization, we apply dropout BIBREF14 at a rate of 50% on the hidden representations immediately after a Bi-LSTM or CNN composition. The outcome objectives are trained such that the gradients of the context encoder weights are downscaled by an order of magnitude (i.e., one tenth) to encourage learning at the later layers. When learning on the NER objective – the main branch of the network – the gradients are not downscaled in the same manner. Moreover, when training on the NER objective, we upweight the loss penalty on “relation” tags (non-O tags) by a factor of 10, which forces the model to prioritize differentiation between different types of interactions over span segmentation. We additionally upweight the loss penalty by a factor of 3 on Training-22 examples compared to NLM-180 examples. We optimize using the Adam BIBREF15 optimization method. These hyper-parameters were tuned during initial experiments. Results and Discussion In this section, we present and discuss the results of our cross-validation experiments. We then describe the “runs” that were submitted as challenge entries and present our official challenge results. We discuss these results in Section SECREF28 . Validation Results We present the results of our initial experiments in Table TABREF20 . Evaluations were produced as as result of 11-fold cross-validation over Training-22 with two drug labels per fold. Instead of macro-averaging over folds, and thereby weighting each fold equally, we evaluate on the union of all 11 test-fold predictions. The upperbound in Table TABREF20 is produced by reducing Training-22 (with gold labels) to our sequence-tagging format and then reverting it back to the original official XML format. Lowered recall is mostly due to simplifying assumptions; e.g., we only consider non-overlapping mentions. For coordinated disjoint cases such as “X and Y inducers”, we only considered “Y inducers” in our simplifying assumption. Imperfect precision is due to discrepancies between the tokenization scheme used by our method and that used to produce gold annotations; this leads to the occasional mismatch in entity offsets during evaluation. Using a stacked Bi-LSTM trained on the original 22 training examples (Table TABREF20 ; row 1) as our baseline, we make the following observations. Incorporating NLM-180 resulted in a significant boost of more than 20 F1-points in relation extraction performance and more than 10 F1-points in NER performance (Table TABREF20 ; row 2), despite the lowered upperbound on NER recall as mentioned in Section SECREF5 . Adding character-CNN based word representations improved performance marginally, more so for NER than relation extraction (Table TABREF20 ; row 3). 
We also implemented several tweaks to the pre-processing and post-processing aspects of the model based on preliminary error analysis including (1) using drug class mentions (e.g., “diuretics”) as proxies if the drug label is not mentioned directly; (2) removing modifiers such as moderate, strong, and potent so that output conforms to official annotation guidelines; and (3) purging predicted mentions with only stopwords or generic terms such as “drugs” or “agents.” These tweaks improved performance by more than two F1-points across both metrics (Table TABREF20 ; row 4). Based on early experiments with simpler models tuned on relaxed matching (not shown in Table TABREF20 and not directly comparable to results displayed in Table TABREF20 ), we found that a stacked Bi-LSTM architecture improves over a single Bi-LSTM by approximately four F1-points on relation extraction (55.59% vs. 51.55% F1 tuned on the relaxed matching criteria). We moreover found that omitting word embeddings as input at the second Bi-LSTM results in worse performance at 52.91% F1. We also experimented with using Temporal Convolution Networks (TCNs) BIBREF16 as a “drop-in” replacement for Bi-LSTMs. Our attempts involved replacing only the second Bi-LSTM with a TCN (Table TABREF20 ; row 4) as well as replacing both Bi-LSTMs with TCNs (Table TABREF20 ; row 5). The results of these early experiments were not promising and further fine-tuning may be necessary for better performance. Official Test Results Our final system submission is based on a stacked Bi-LSTM network with character-CNNs trained on both Training-22 and NLM-180 (corresponding to row 4 of Table TABREF20 ). We submitted the following three runs based on this architecture: A single model. An ensemble over ten models each trained with randomly initialized weights and a random development split. Intuitively, models collectively “vote” on predicted annotations that are kept and annotations that are discarded. A unique annotation (entity or relation) has one vote for each time it appears in one of the ten model prediction sets. In terms of implementation, unique annotations are incrementally added (to the final prediction set) in order of descending vote count; subsequent annotations that conflict (i.e., overlap based on character offsets) with existing annotations are discarded. Hence, we loosely refer to this approach as “voting-based” ensembling. A single model with pre/post-processing rules to handle modifier coordinations; for example, “X and Y inducers” would be correctly identified as two distinct entities corresponding to “X inducers” and “Y inducers.” Here, we essentially encoded “X and Y inducers” as a single entity when training the NER objective; during test time, we use simple rules based on pattern matching to split the joint “entity” into its constituents. Eight teams participated in task 1 while four teams participated in task 2. We record the relative performance of our system (among others in the top 5) on the two official test sets in Table TABREF24 . For each team, we only display the performance of the best run for a particular test set. Methods are grouped by the data used for training and ranked in ascending order of primary relation extraction performance followed by entity recognition performance. We also included a single model trained solely on Training-22, that was not submitted, for comparison. Our voting-based ensemble performed best among the three systems submitted by our team on both NER and relation extraction. 
In the official challenge, this model placed second overall on both NER and relation extraction. Tang et al. BIBREF20 boasts the top performing system on both tasks. In addition to Training-22 and NLM-180, the team trained and validated their models on a set of 1148 sentences sampled from DailyMed labels that were manually annotated according to official annotation guidelines. Hence, strictly speaking, their method is not directly comparable to ours given the significant difference in available training data. Discussion While precision was similar between the three systems (with exceptions), we observed that our ensemble-based system benefited mostly from improved recall. This aligns with our initial expectation (based on prior experience with deep learning models) that an ensemble-based approach would improve stability and accuracy with deep neural models. Although including NLM-180 as training data resulted in significant performance gains during 11-fold cross validation, we find that the same improvements were not as dramatic on either test sets despite the 800% gain in training data. As such, we offer the following analysis. First, we suspect that there may be a semantic or annotation drift between these datasets as annotation guidelines evolve over time and as annotators become more experienced. To our knowledge, the datasets were annotated in the following order: NLM-180, Training-22, and finally Test Sets 1 and 2; moreover, Test Sets 1 and 2 were annotated by separate groups of annotators. Second, having few but higher quality examples may be more advantageous than having many but lower quality examples, at least for this particular task where evaluation is based on matching exact character offsets. Finally, we note that the top performing system exhibits superior performance on Test Set 1 compared to Test Set 2; interestingly, we observe an inverse of the scenario in our own system. This may be an indicator that our system struggles with data that is more “sparse” (as previously defined in Section SECREF2 ). Conclusion We presented a method for jointly extracting precipitants and their interaction types as part of a multi-task framework that additionally detects interaction outcome. Among three “runs”, a ten model voting-ensemble was our best performer. In future efforts, we will experiment with Graph Convolution Networks BIBREF21 over dependency trees as a “drop-in” replace for Bi-LSTMs to assess its suitability for this task. Acknowledgements This research was conducted during TT's participation in the Lister Hill National Center for Biomedical Communications (LHNCBC) Research Program in Medical Informatics for Graduate students at the U.S. National Library of Medicine, National Institutes of Health. HK is supported by the intramural research program at the U.S. National Library of Medicine, National Institutes of Health. RK and TT are also supported by the U.S. National Library of Medicine through grant R21LM012274.
Training-22, NLM-180
30db81df46474363d5749d7f6a94b7ef95cd3e01
30db81df46474363d5749d7f6a94b7ef95cd3e01_0
Q: What domains do they experiment with? Text: Introduction The specificity of a sentence measures its “quality of belonging or relating uniquely to a particular subject” BIBREF0 . It is often pragmatically defined as the level of detail in the sentence BIBREF1 , BIBREF2 . When communicating, specificity is adjusted to serve the intentions of the writer or speaker BIBREF3 . In the examples below, the second sentence is clearly more specific than the first one: Ex1: This brand is very popular and many people use its products regularly. Ex2: Mascara is the most commonly worn cosmetic, and women will spend an average of $4,000 on it in their lifetimes. Studies have demonstrated the important role of sentence specificity in reading comprehension BIBREF4 and in establishing common ground in dialog BIBREF5 . It has also been shown to be a key property in analyses and applications, such as summarization BIBREF6 , argumentation mining BIBREF7 , political discourse analysis BIBREF8 , student discussion assessment BIBREF9 , BIBREF0 , deception detection BIBREF10 , and dialogue generation BIBREF11 . Despite their usefulness, prior sentence specificity predictors BIBREF1 , BIBREF2 , BIBREF0 are trained with sentences from specific domains (news or classroom discussions), and have been found to fall short when applied to other domains BIBREF10 , BIBREF0 . They are also trained to label a sentence as either general or specific BIBREF1 , BIBREF2 , or low/medium/high specificity BIBREF0 , though in practice specificity has been analyzed as a continuous value BIBREF6 , BIBREF12 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , as it should be BIBREF13 . In this work, we present an unsupervised domain adaptation system for sentence specificity prediction, specifically designed to output real-valued estimates. It effectively generalizes sentence specificity analysis to domains where no labeled data is available, and outputs values that are close to the real-world distribution of sentence specificity. Our main framework is an unsupervised domain adaptation system based on Self-Ensembling BIBREF14 , BIBREF15 that simultaneously reduces source prediction errors and generates feature representations that are robust against noise and across domains. Past applications of this technique have focused on computer vision problems; to make it effective for text processing tasks, we modify the network to better utilize labeled data in the source domain and explore several data augmentation methods for text. We further propose a posterior regularization technique BIBREF16 that generally applies to the scenario where it is easy to get coarse-grained categories of labels, but fine-grained predictions are needed. Specifically, our regularization term seeks to move the distribution of the classifier posterior probabilities closer to that of a pre-specified target distribution, which in our case is a specificity distribution derived from the source domain. Experimental results show that our system generates more accurate real-valued sentence specificity predictions that correlate well with human judgment, across three domains that are vastly different from the source domain (news): Twitter, Yelp reviews and movie reviews. Compared to a state-of-the-art system trained on news data BIBREF2 , our best setting achieves a 50%-68% reduction in mean absolute error and increases Kendall's Tau and Spearman correlations by 0.07-0.10 and 0.12-0.13, respectively. 
Finally, we conduct a task-based evaluation that demonstrates the usefulness of sentence specificity prediction in open-domain dialogue generation. Prior work showed that the quality of responses from dialog generation systems can be significantly improved if short examples are removed from training BIBREF17 , potentially preventing the system from overly favoring generic responses BIBREF18 , BIBREF19 . We show that predicted specificity works more effectively than length, and enables the system to generate more diverse and informative responses with better quality. In sum, the paper's contributions are as follows: Task setup With unsupervised domain adaptation, one has access to labeled sentence specificity in one source domain, and unlabeled sentences in all target domains. The goal is to predict the specificity of target domain data. Our source domain is news, the only domain with publicly available labeled data for training BIBREF1 . We crowdsource sentence specificity for evaluation for three target domains: Twitter, Yelp reviews and movie reviews. The data is described in Section SECREF4 . Existing sentence specificity labels in news are binary, i.e., a sentence is either general or specific. However, in practice, real-valued estimates of sentence specificity are widely adopted BIBREF6 , BIBREF12 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Most of these work directly uses the classifier posterior distributions, although we will later show that such distributions do not follow the true distribution of sentence specificity (see Figure FIGREF29 , Speciteller vs. real). We aim to produce accurate real-valued specificity estimates despite the binary training data. Specifically, the test sentences have real-valued labels between 0 and 1. We evaluate our system using mean absolute error, Kendall's Tau and Spearman correlation. Architecture Our core technique is the Self-Ensembling BIBREF14 , BIBREF15 of a single base classification model that utilizes data augmentation, with a distribution regularization term to generate accurate real-valued predictions from binary training labels. Base model Figure FIGREF5 depicts the base model for sentence specificity prediction. The overall structure is similar to that in BIBREF0 . Each word in the sentence is encoded into an embedding vector and the sentence is passed through a bidirectional Long-Short Term Memory Network (LSTM) BIBREF20 to generate a representation of the sentence. This representation is concatenated with a series of hand crafted features, and then passed through a multilayer perceptron to generate specificity predictions INLINEFORM0 . Training treats these as probabilities of the positive class, but we will show in Section SECREF13 how they can be adapted to make real-valued predictions. Our hand crafted features are taken from BIBREF2 , including: the number of tokens in the sentence; the number of numbers, capital letters and punctuations in the sentence, normalized by sentence length; the average number of characters in each word; the fraction of stop words; the number of words that can serve as explicit discourse connectives; the fraction of words that have sentiment polarity or are strongly subjective; the average familiarity and imageability of words; and the minimum, maximum, and average inverse document frequency (idf) over the words in the sentence. In preliminary experiments, we found that combining hand-crafted features with the BiLSTM performed better than using either one individually. 
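A minimal sketch of a few of the shallow features listed above is given below; the stop-word list is a toy placeholder, and the remaining features (idf, polarity, subjectivity, familiarity, imageability, connectives) are omitted.

```python
import string

# Toy stop-word list; a real system would use a standard list (assumption).
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it"}

def shallow_features(sentence):
    """Compute a handful of the surface features described above."""
    tokens = sentence.split()
    n = len(tokens)
    chars = "".join(tokens)
    return {
        "n_tokens": n,
        "numbers_norm": sum(c.isdigit() for c in chars) / n,
        "capitals_norm": sum(c.isupper() for c in chars) / n,
        "punct_norm": sum(c in string.punctuation for c in chars) / n,
        "avg_word_len": len(chars) / n,
        "stopword_frac": sum(t.lower() in STOP_WORDS for t in tokens) / n,
    }

print(shallow_features("Mascara is the most commonly worn cosmetic."))
```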
Unsupervised domain adaptation Our unsupervised domain adaptation framework is based on Self-Ensembling BIBREF15 ; we lay out the core ideas first and then discuss modifications to make the framework suitable for textual data. There are two ideas acting as the driving force behind Self-Ensembling. First, adding noise to each data point INLINEFORM0 can help regularize the model by encouraging the model prediction to stay the same regardless of the noise, creating a manifold around the data point where predictions are invariant BIBREF22 . This can be achieved by minimizing a consistency loss INLINEFORM1 between the predictions. Second, temporal ensembling—ensemble the same model trained at different time steps—is shown to be beneficial to prediction especially in semi-supervised cases BIBREF23 ; in particular, we average model parameters from each time step BIBREF14 . These two ideas are realized by using a student network and a teacher network. The parameters of the teacher network are the exponential average of that of the student network, making the teacher a temporal ensemble of the student. Distinct noise augmentations are applied to the input of each network, hence the consistency loss INLINEFORM0 is applied between student and teacher predictions. The student learns from labeled source data and minimizes the supervised cross-entropy loss INLINEFORM1 . Domain adaptation is achieved by minimizing the consistency loss between the two networks, which can be done with unlabeled target data. The overall loss function is the weighted sum of INLINEFORM2 and INLINEFORM3 . Figure FIGREF6 depicts the process. Concretely, the student and teacher networks are of identical structure following the base model (Section SECREF7 ), but features distinct noise augmentation. The student network learns to predict sentence specificity from labeled source domain data. The input sentences are augmented with noise INLINEFORM0 . The teacher network predicts the specificity of each sentence with a different noise augmentation INLINEFORM1 . Parameters of the teacher network INLINEFORM2 are updated each time step to be the exponential moving average of the corresponding parameters in the student network. The teacher parameter INLINEFORM3 at each time step INLINEFORM4 is DISPLAYFORM0 where INLINEFORM0 is the degree of weighting decay, a constant between 0 and 1. INLINEFORM1 denotes parameters for the student network. The consistency loss is defined as the squared difference between the predictions of the student and teacher networks BIBREF14 : DISPLAYFORM0 where INLINEFORM0 denotes the base network and x denotes the input sentence. The teacher network is not involved when minimizing the supervised loss INLINEFORM1 . An important difference between our work and BIBREF15 is that in their work only unlabeled target data contributes to the consistency loss INLINEFORM0 . Instead, we use both source and target sentences, bringing the predictions of the two networks close to each other not only on the target domain, but also the source domain. Unlike many vision tasks where predictions intuitively stay the same with different types of image augmentation (e.g., transformation and scaling), text is more sensitive to noise. Self-Ensembling relies on heavy augmentation on both student and teacher networks, and our experiments revealed that incorporating source data in the consistency loss term mitigates additional biases from noise augmentation. 
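The equation bodies above were lost in extraction (the DISPLAYFORM placeholders), so the following is a hedged sketch of the two pieces just described: the exponential-moving-average teacher update and the squared-difference consistency loss applied to both source and target sentences. `augment` stands for whatever noise function is applied to the inputs and is an assumption, not the authors' implementation.

```python
import torch

def update_teacher(student, teacher, alpha=0.999):
    """theta_teacher <- alpha * theta_teacher + (1 - alpha) * theta_student,
    applied after each update of the student network."""
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)

def consistency_loss(student, teacher, tokens, feats, augment):
    """Squared difference between student and teacher predictions on the same
    sentence under two independent noise augmentations; no labels are needed,
    so it can be computed on both source and target domain sentences."""
    pred_student = student(*augment(tokens, feats))
    with torch.no_grad():
        pred_teacher = teacher(*augment(tokens, feats))
    return ((pred_student - pred_teacher) ** 2).mean()
```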
At training time, the teacher network's parameters are fixed during gradient descent, and the gradient only propagates through the student network. After each update of the student network, we recalculate the weights of the teacher network using exponential moving average. At testing time, we use the teacher network for prediction. An important factor contributing to the effectiveness of Self-Ensembling is applying noise to the inputs to make them more robust against domain shifts. For computer vision tasks, augmentation techniques including affine transformation, scaling, flipping and cropping could be used BIBREF15 . However these operations could not be used on text. We designed several noise augmentations for sentences, including: adding Gaussian noise to both the word embeddings and shallow features; randomly removing words in a sentence; substituting word embeddings with a random vector, or a zero vector like applying dropout. To produce enough variations of data, augmentation is applied to INLINEFORM0 half of the words in a sentence. Regularizing the posterior distribution Sentence specificity prediction is a task where the existing training data have binary labels, while real-valued outputs are desirable. Prior work has directly used classifier posterior probabilities. However, the posterior distribution and the true specificity distribution are quite different (see Figure FIGREF29 , Speciteller vs. real). We propose a regularization term to bridge the gap between the two. Specifically, we view the posterior probability distribution as a latent distribution, which allows us to apply a variant of posterior regularization BIBREF16 , previously used to apply pre-specified constraints to latent variables in structured prediction. Here, we apply a distance penalty between the latent distribution and a pre-specified reference distribution (which, in our work, is from the source domain). BIBREF13 found that in news, the distribution of sentence specificity is bell shaped, similar to that of a Gaussian. Our analysis on sentence specificity for three target domains yields the same insights (Figure FIGREF18 ). We explored two regularization formulations, with and without assuming that the two distributions are Gaussian. Both were successful and achieved similar performance. Let INLINEFORM0 and INLINEFORM1 be the mean and standard deviation of the predictions (posterior probabilities) in a batch. The first formulation assumes that the predictions and reference distributions are Gaussian. It uses the KL divergence between the predicted distribution INLINEFORM2 and the reference Gaussian distribution INLINEFORM3 . The distribution regularization loss can be written as: DISPLAYFORM0 The second formulation does not assume Gaussian distributions and only compares the mean and standard deviation of the two distributions using a weighting term INLINEFORM0 : DISPLAYFORM0 Combining the regularization term INLINEFORM0 into a single objective, the total loss is thus: DISPLAYFORM0 where INLINEFORM0 is the cross entropy loss for the source domain predictions, INLINEFORM1 is the consistency loss. INLINEFORM2 and INLINEFORM3 are weighting hyperparameters. In practice, this regularization term serves a second purpose. After adding the consistency loss INLINEFORM0 , we observed that the predictions are mostly close to each other with values between 0.4 and 0.6, and their distribution resembles a Gaussian with very small variance (c.f. Figure FIGREF29 line SE+A). 
This might be due to the consistency loss pulling all the predictions together, since when all predictions are identical, the loss term will be zero. This regularization can be used to counter this effect, and avoid the condensation of predicted values. Finally, this regularization is distinct from class imbalance loss terms such as that used in BIBREF15 , which we found early on to hurt performance rather than helping. Source domain The source domain for sentence specificity is news, for which we use three publicly available labeled datasets: (1) training sentences from BIBREF1 and BIBREF2 , which consists of 1.4K general and 1.4K specific sentences from the Wall Street Journal. (2) 900 news sentences crowdsourced for binary general/specific labels BIBREF24 ; 55% of them are specific. (3) 543 news sentences from BIBREF13 . These sentences are rated on a scale of INLINEFORM0 , so for consistency with the rest of the training labels, we pick sentences with average rating INLINEFORM1 as general examples and those with average rating INLINEFORM2 as specific. In total, we have 4.3K sentences with binary labels in the source domain. Target domains We evaluate on three target domains: Twitter, Yelp and movie reviews. Since no annotated data exist for these domains, we crowdsource specificity for sentences sampled from each domain using Amazon Mechanical Turk. We follow the context-independent annotation instructions from BIBREF13 . Initially, 9 workers labeled specificity for 1000 sentences in each domain on a scale of 1 (very general) to 5 (very specific), which we rescaled to 0, 0.25, 0.5, 0.75, and 1. Inter-annotator agreement (IAA) is calculated using average Cronbach's alpha BIBREF25 values for each worker. For quality control, we exclude workers with IAA below 0.3, and include the remaining sentences that have at least 5 raters. Our final IAA values fall between 0.68-0.70, compatible with the reported 0.72 from expert annotators in BIBREF13 . The final specificity value is aggregated to be the average of the rescaled ratings. We also use large sets of unlabeled data in each domain: Twitter: 984 tweets annotated, 50K unlabeled, sampled from BIBREF26 . Yelp: 845 sentences annotated, 95K unlabeled, sampled from the Yelp Dataset Challenge 2015 BIBREF27 . Movie: 920 sentences annotated, 12K unlabeled, sampled from BIBREF28 . Figure FIGREF18 shows the distribution of ratings for the annotated data. We also plot a fitted Gaussian distribution for comparison. Clearly, most sentences have mid-range specificity values, consistent with news sentences BIBREF13 . Interestingly the mean and variance for the three distributions are similar to each other and to those from BIBREF13 , as show in Table TABREF23 . Therefore, we use the source distribution (news, BIBREF13 ) as the reference distribution for posterior distribution regularization, and set INLINEFORM0 to 0.417 and 0.227 accordingly. Experiments We now evaluate our framework on predicting sentence specificity for the three target domains. We report experimental results on a series of settings to evaluate the performance of different components. Systems Length baseline This simple baseline predicts specificity proportionally to the number of words in the sentence. Shorter sentences are predicted as more general and longer sentences are predicted as more specific. Speciteller baseline Speciteller BIBREF2 is a semi-supervised system trained on news data with binary labels. 
The posterior probabilities of the classifier are used directly as specificity values. Self-ensembling baseline (SE) Our system with the teacher network, but only using exponential moving average (without the consistency loss or distribution regularization). Distribution only (SE+D) Our system with distribution regularization INLINEFORM0 using the mean and standard deviation (Eq. EQREF15 ), but without the consistency loss INLINEFORM1 . Adaptation only (SE+A) Our system with the consistency loss INLINEFORM0 , but without distribution regularization INLINEFORM1 . SE+AD (KL) Our system with both INLINEFORM0 and INLINEFORM1 using KL divergence (Eq. EQREF14 ). SE+AD (mean-std) Our system with both INLINEFORM0 and INLINEFORM1 using mean and standard deviation (Eq. EQREF15 ). SE+AD (no augmentation) We also show the importance of noise augmentation, by benchmarking the same setup as SE+AD (mean-std) without data augmentation. Training details Hyperparameters are tuned on a validation set of 200 tweets that doesn't overlap with the test set. We then use this set of parameters for all testing domains. The LSTM encoder generates 100-dimensional representations. For the multilayer perceptron, we use 3 fully connected 100-dimensional layers. We use ReLU activation with batch normalization. For the Gaussian noise in data augmentation, we use standard deviation 0.1 for word embeddings and 0.2 for shallow features. The probabilities of deleting a word and replacing a word vector are 0.15. The exponential moving average decay INLINEFORM0 is 0.999. Dropout rate is 0.5 for all layers. The batch size is 32. INLINEFORM1 , INLINEFORM2 for KL loss and 100 for mean and std.dev loss. INLINEFORM3 . We fix the number of training to be 30 epochs for SE+A and SE+AD, 10 epochs for SE, and 15 epochs for SE+D. We use the Adam optimizer with learning rate 0.0001, INLINEFORM4 , INLINEFORM5 . As discussed in Section SECREF4 , posterior distribution regularization parameters INLINEFORM6 are set to be those from BIBREF13 . Evaluation metrics We use 3 metrics to evaluate real-valued predictions: (1) the Spearman correlation between the labeled and predicted specificity values, higher is better; (2) the pairwise Kendall's Tau correlation, higher is better; (3) mean absolute error (MAE): INLINEFORM0 , lower is better. Results and analysis Table TABREF25 shows the full results for the baselines and each configuration of our framework. For analysis, we also plot in Figure FIGREF29 the true specificity distributions in Twitter test set, predicted distributions for Speciteller, the Self-Ensembling baseline (SE), SE with adaptation (SE+A) and also with distribution regularization (SE+AD). Speciteller, which is trained on news sentences, cannot generalize well to other domains, as it performs worse than just using sentence length for two of the three domains (Yelp and Movie). From Figure FIGREF29 , we can see that the prediction mass of Speciteller is near the extrema values 0 and 1, and the rest of the predictions falls uniformly in between. These findings confirm the need of a generalizable system. Across all domains and all metrics, the best performing system is our full system with domain adaptation and distribution regularization (SE+AD with mean-std or KL), showing that the system generalizes well across different domains. Using a paired Wilcoxon test, it significantly ( INLINEFORM0 ) outperforms Speciteller in terms of MAE; it also achieved higher Spearman and Kendall's Tau correlations than both Length and Speciteller. 
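The regularization equations referenced earlier (Eq. EQREF14 and EQREF15) also did not survive extraction, so the sketch below is a plausible reconstruction rather than the paper's exact formulation: it uses the closed-form KL divergence between two univariate Gaussians and a weighted mean/standard-deviation penalty, with the reference parameters (0.417, 0.227) from Section 4; the weighting terms and names are assumptions.

```python
import torch

MU_REF, SIGMA_REF = 0.417, 0.227   # reference (news) specificity distribution

def kl_gaussian_reg(preds, mu_ref=MU_REF, sigma_ref=SIGMA_REF, eps=1e-6):
    """KL( N(mu, sigma^2) || N(mu_ref, sigma_ref^2) ), with mu and sigma taken
    from the batch of posterior predictions; assumes both are Gaussian."""
    mu, sigma = preds.mean(), preds.std() + eps
    return (torch.log(sigma_ref / sigma)
            + (sigma ** 2 + (mu - mu_ref) ** 2) / (2 * sigma_ref ** 2) - 0.5)

def mean_std_reg(preds, mu_ref=MU_REF, sigma_ref=SIGMA_REF, rho=1.0):
    """Distribution-free variant: penalize the gap between the batch mean and
    standard deviation and the reference values, weighted by rho."""
    mu, sigma = preds.mean(), preds.std()
    return (mu - mu_ref) ** 2 + rho * (sigma - sigma_ref) ** 2

def total_loss(ce_loss, cons_loss, dist_loss, lam_cons=1.0, lam_dist=1.0):
    """Weighted sum of supervised cross-entropy, consistency, and
    distribution-regularization terms; the weights are hyperparameters."""
    return ce_loss + lam_cons * cons_loss + lam_dist * dist_loss
```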
Component-wise, the Self-Ensembling baseline (SE) achieves significantly lower MAE than Speciteller, and higher correlations than either baseline. Figure FIGREF29 shows that unlike Speciteller, the SE baseline does not have most of its prediction mass near 0 and 1, demonstrating the effectiveness of temporal ensembling. Using both the consistency loss INLINEFORM0 and the distribution regularization INLINEFORM1 achieves the best results on all three domains; however adding only INLINEFORM2 (SE+A) or INLINEFORM3 (SE+D) improves for some measures or domains but not all. This shows that both terms are crucial to make the system robust across domains. The improvements from distribution regularization are visualized in Figure FIGREF29 . With SE+A, most of the predicted labels are between 0.4 and 0.6. Applying distribution regularization (SE+AD) makes them much closer to the real distribution of specificity. With respect to the two formulations of regularization (KL and mean-std), both are effective in generating more accurate real-valued estimates. Their performances are comparable, hence using only the mean and standard deviation values, without explicitly modeling the reference Gaussian distribution, works equally well. Finally, without data augmentation (column no aug.), the correlations are clearly lower than our full model, stressing the importance of data augmentation in our framework. Specificity in dialogue generation We also evaluate our framework in open-domain dialogue. This experiment also presents a case for the usefulness of an effective sentence specificity system in dialogue generation. With seq2seq dialogue generation models, BIBREF17 observed significant quality improvement by removing training examples with short responses during preprocessing; this is potentially related to this type of models favor generating non-informative, generic responses BIBREF18 , BIBREF19 . We show that filtering training data by predicted specificity results in responses of higher quality and informativeness than filtering by length. Task and settings We implemented a seq2seq question-answering bot with attention using OpenNMT BIBREF29 . The bot is trained on OpenSubtitles BIBREF30 following prior work BIBREF17 . We restrict the instances to question-answer pairs by selecting consecutive sentences with the first sentence ending with question mark, the second sentence without question mark and follows the first sentence by less than 20 seconds, resulting in a 14M subcorpus. The model uses two hidden layers of size 2048, optimized with Adam with learning rate 0.001. The batch size is 64. While decoding, we use beam size 5; we block repeating n-grams, and constrain the minimum prediction length to be 5. These parameters are tuned on a development set. We compare two ways to filter training data during preprocessing: Remove Short: Following BIBREF17 , remove training examples with length of responses shorter than a threshold of 5. About half of the data are removed. Remove General: Remove predicted general responses from training examples using our system. We use the responses in the OpenSubtitles training set as the unlabeled target domain data during training. We remove the least specific responses such that the resulting number of examples is the same as Remove Short. For fair comparison, at test time, we adjust the length penalty described in BIBREF31 for both models, so the average response lengths are the same among both models. 
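To make the two preprocessing conditions used in the dialogue experiment concrete, here is a minimal sketch; `specificity_model.predict` is an assumed interface, not the released system. Remove Short drops responses below a length threshold, while Remove General ranks responses by predicted specificity and keeps the same number of examples for a fair comparison.

```python
def remove_short(pairs, min_len=5):
    """Baseline filter from prior work: drop question-answer pairs whose
    response is shorter than a length threshold (about half the data)."""
    return [(q, a) for q, a in pairs if len(a.split()) >= min_len]

def remove_general(pairs, specificity_model, keep_n):
    """Proposed filter: score each response with the domain-adapted specificity
    predictor and keep only the keep_n most specific examples."""
    scored = sorted(pairs, key=lambda qa: specificity_model.predict(qa[1]), reverse=True)
    return scored[:keep_n]
```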
Evaluation We use automatic measures and human evaluation as in BIBREF32 and BIBREF17 . Table TABREF31 shows the diversity and perplexity of the responses. Diversity is calculated as the type-token ratio of unigrams and bigrams. The test set for these two metrics is a random sample of 10K instances from OpenSubtitles that doesn't overlap with the training set. Clearly, filtering training data according to specificity results in more diverse responses with lower perplexity than filtering by length. We also crowdsource human evaluation for quality; in addition, we evaluate the systems for response informativeness. Note that in our instructions, informativeness means the usefulness of information and is a distinct measure from specificity. The original training data of specificity are linguistically annotated and involves only the change in the level of details BIBREF1 . Separate experiments are conducted to avoid priming. We use a test set of 388 instances, including questions randomly sampled from OpenSubtitles that doesn't overlap with the training set, and 188 example questions from previous dialogue generation papers, including BIBREF33 . We use Amazon MechenicalTurk for crowdsourcing. 7 workers chose between the two responses to the same question. Table TABREF32 shows the human evaluation comparing Remove Short vs. Remove General. Removing predicted general responses performs better than removing short sentences, on both informativeness and quality and on both test sets. This shows that sentence specificity is a superior measure for training data preprocessing than sentence length. Related Work Sentence specificity prediction as a task is proposed by BIBREF1 , who repurposed discourse relation annotations from WSJ articles BIBREF34 for sentence specificity training. BIBREF2 incorporated more news sentences as unlabeled data. BIBREF0 developed a system to predict sentence specificity for classroom discussions, however the data is not publicly available. All these systems are classifiers trained with categorical data (2 or 3 classes). We use Self-Ensembling BIBREF15 as our underlying framework. Self-Ensembling builds on top of Temporal Ensembling BIBREF23 and the Mean-Teacher network BIBREF14 , both of which originally proposed for semi-supervised learning. In visual domain adaptation, Self-Ensembling shows superior performance than many recently proposed approaches BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 , BIBREF39 , BIBREF40 , including GAN-based approaches. To the best of our knowledge, this approach has not been used on language data. Conclusion We present a new model for predicting sentence specificity. We augment the Self-Ensembling method BIBREF15 for unsupervised domain adaptation on text data. We also regularize the distribution of predictions to match a reference distribution. Using only binary labeled sentences from news articles as source domain, our system could generate real-valued specificity predictions on different target domains, significantly outperforming previous work on sentence specificity prediction. Finally, we show that sentence specificity prediction can potentially be beneficial in improving the quality and informativeness of dialogue generation systems. Acknowledgments This research was partly supported by the Amazon Alexa Graduate Fellowship. We thank the anonymous reviewers for their helpful comments.
Twitter, Yelp reviews and movie reviews
5c26388a2c0b0452d529d5dd565a5375fdabdb70
5c26388a2c0b0452d529d5dd565a5375fdabdb70_0
Q: What games are used to test author's methods? Text: Introduction Text adventure games, in which players must make sense of the world through text descriptions and declare actions through natural language, can provide a stepping stone toward more real-world environments where agents must communicate to understand the state of the world and affect change in the world. Despite the steadily increasing body of research on text-adventure games BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, and in addition to the ubiquity of deep reinforcement learning applications BIBREF8, BIBREF9, teaching an agent to play text-adventure games remains a challenging task. Learning a control policy for a text-adventure game requires a significant amount of exploration, resulting in training runs that take hundreds of thousands of simulations BIBREF2, BIBREF7. One reason that text-adventure games require so much exploration is that most deep reinforcement learning algorithms are trained on a task without a real prior. In essence, the agent must learn everything about the game from only its interactions with the environment. Yet, text-adventure games make ample use of commonsense knowledge (e.g., an axe can be used to cut wood) and genre themes (e.g., in a horror or fantasy game, a coffin is likely to contain a vampire or other undead monster). This is in addition to the challenges innate to the text-adventure game itself—games are puzzles—which results in inefficient training. BIBREF7 developed a reinforcement learning agent that modeled the text environment as a knowledge graph and achieved state-of-the-art results on simple text-adventure games provided by the TextWorld BIBREF5 environment. They observed that a simple form of transfer from very similar games greatly improved policy training time. However, games beyond the toy TextWorld environments are beyond the reach of state-of-the-art techniques. In this paper, we explore the use of knowledge graphs and associated neural embeddings as a medium for domain transfer to improve training effectiveness on new text-adventure games. Specifically, we explore transfer learning at multiple levels and across different dimensions. We first look at the effects of playing a text-adventure game given a strong prior in the form of a knowledge graph extracted from generalized textual walk-throughs of interactive fiction as well as those made specifically for a given game. Next, we explore the transfer of control policies in deep Q-learning (DQN) by pre-training portions of a deep Q-network using question-answering and by DQN-to-DQN parameter transfer between games. We evaluate these techniques on two different sets of human authored and computer generated games, demonstrating that our transfer learning methods enable us to learn a higher-quality control policy faster. Background and Related Work Text-adventure games, in which an agent must interact with the world entirely through natural language, provide us with two challenges that have proven difficult for deep reinforcement learning to solve BIBREF2, BIBREF4, BIBREF7: (1) The agent must act based only on potentially incomplete textual descriptions of the world around it. The world is thus partially observable, as the agent does not have access to the state of the world at any stage. (2) the action space is combinatorially large—a consequence of the agent having to declare commands in natural language. 
These two problems together have kept commercial text adventure games out of the reach of existing deep reinforcement learning methods, especially given the fact that most of these methods attempt to train on a particular game from scratch. Text-adventure games can be treated as partially observable Markov decision processes (POMDPs). This can be represented as a 7-tuple of $\langle S,T,A, \Omega , O,R, \gamma \rangle $: the set of environment states, conditional transition probabilities between states, words used to compose text commands, observations, conditional observation probabilities, the reward function, and the discount factor respectively BIBREF5. Multiple recent works have explored the challenges associated with these games BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. BIBREF2 introduce the LSTM-DQN, which learns to score the action verbs and corresponding objects separately and then combine them into a single action. BIBREF1 propose the Deep Reinforcement Relevance Network that consists of separate networks to encode state and action information, with a final Q-value for a state-action pair that is computed between a pairwise interaction function between these. BIBREF4 present the Action Elimination Network (AEN), which restricts actions in a state to the top-k most likely ones, using the emulator's feedback. BIBREF10 design an agent that uses multiple modules to identify a general set of game play rules for text games across various domains. None of these works study how to transfer policies between different text-adventure games in any depth and so there exists a gap between the two bodies of work. Transferring policies across different text-adventure games requires implicitly learning a mapping between the games' state and action spaces. The more different the domain of the two games, the harder this task becomes. Previous work BIBREF7 introduced the use of knowledge graphs and question-answering pre-training to aid in the problems of partial observability and a combinatorial action space. This work made use of a system called TextWorld BIBREF5 that uses grammars to generate a series of similar (but not exact same) games. An oracle was used to play perfect games and the traces were used to pre-train portions of the agent's network responsible for encoding the observations, graph, and actions. Their results show that this form of pre-training improves the quality of the policy at convergence it does not show a significant improvement in the training time required to reach convergence. Further, it is generally unrealistic to have a corpus of very similar games to draw from. We build on this work, and explore modifications of this algorithm that would enable more efficient transfer in text-adventure games. Work in transfer in reinforcement learning has explored the idea of transferring skills BIBREF11, BIBREF12 or transferring value functions/policies BIBREF13. Other approaches attempt transfer in model-based reinforcement learning BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, though traditional approaches here rely heavily on hand crafting state-action mappings across domains. BIBREF19 learn to play games by predicting mappings across domains using a both deep Q-networks and value iteration networks, finding that that grounding the game state using natural language descriptions of the game itself aids significantly in transferring useful knowledge between domains. 
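For readers who prefer code to notation, the 7-tuple above can be written down as a simple container; the concrete Python types are illustrative placeholders and are not prescribed by any of the cited work.

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class TextGamePOMDP:
    """The 7-tuple <S, T, A, Omega, O, R, gamma> used to formalize a text-adventure game."""
    states: Set[str]            # S: environment states
    transition: Callable        # T: P(s' | s, a)
    action_words: Set[str]      # A: words used to compose text commands
    observations: Set[str]      # Omega: textual observations
    observation_fn: Callable    # O: P(o | s', a)
    reward: Callable            # R: reward function
    gamma: float = 0.99         # discount factor (illustrative value)
```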
In transfer for deep reinforcement learning, BIBREF8 propose the Actor-Mimic network which learns from expert policies for a source task using policy distillation and then initializes the network for a target task using these parameters. BIBREF20 also use policy distillation, using task specific features as inputs to a multi-task policy network and use a hierarchical experience sampling method to train this multi-task network. Similarly, BIBREF21 attempt to transfer parameters by using frozen parameters trained on source tasks to help learn a new set of parameters on target tasks. BIBREF22 attempt something similar but use attention networks to transfer expert policies between tasks. These works, however, do not study the requirements for enabling efficient transfer for tasks rooted in natural language, nor do they explore the use of knowledge graphs as a state representation. Knowledge Graphs for DQNs A knowledge graph is a directed graph formed by a set of semantic, or RDF, triples in the form of $\langle subject, relation, object\rangle $—for example, $\langle vampires, are, undead\rangle $. We follow the open-world assumption that what is not in our knowledge graph can either be true or false. BIBREF7 introduced the Knowledge Graph DQN (KG-DQN) and touched on some aspects of transfer learning, showing that pre-training portions of the deep Q-network using question answering system on perfect playthroughs of a game increases the quality of the learned control policy for a generated text-adventure game. We build on this work and use KG-DQN to explore transfer with both knowledge graphs and network parameters. Specifically we seek to transfer skills and knowledge from (a) static text documents describing game play and (b) from playing one text-adventure game to a second complete game in in the same genre (e.g., horror games). The rest of this section describes KG-DQN in detail and summarizes our modifications. For each step that the agent takes, it automatically extracts a set of RDF triples from the received observation through the use of OpenIE BIBREF23 in addition to a few rules to account for the regularities of text-adventure games. The graph itself is more or less a map of the world, with information about objects' affordances and attributes linked to the rooms that they are place in in a map. The graph also makes a distinction with respect to items that are in the agent's possession or in their immediate surrounding environment. We make minor modifications to the rules used in BIBREF7 to better achieve such a graph in general interactive fiction environments. The agent also has access to all actions accepted by the game's parser, following BIBREF2. For general interactive fiction environments, we develop our own method to extract this information. This is done by extracting a set of templates accepted by the parser, with the objects or noun phrases in the actions replaces with a OBJ tag. An example of such a template is "place OBJ in OBJ". These OBJ tags are then filled in by looking at all possible objects in the given vocabulary for the game. This action space is of the order of $A=\mathcal {O}(|V| \times |O|^2)$ where $V$ is the number of action verbs, and $O$ is the number of distinct objects in the world that the agent can interact with. 
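A small sketch of the template-filling step just described (illustrative, not the authors' extraction code): each OBJ slot in a parser template is filled with every object in the game vocabulary, which is what makes the raw action space grow on the order of |V| x |O|^2 before any pruning.

```python
from itertools import product

def fill_templates(templates, objects):
    """Expand parser templates such as "place OBJ in OBJ" by substituting every
    combination of game objects for the OBJ slots."""
    actions = []
    for template in templates:
        n_slots = template.count("OBJ")
        for combo in product(objects, repeat=n_slots):
            action = template
            for obj in combo:
                action = action.replace("OBJ", obj, 1)  # fill slots left to right
            actions.append(action)
    return actions

# e.g. fill_templates(["place OBJ in OBJ"], ["key", "chest", "lantern"])
# -> 9 candidate actions: "place key in key", "place key in chest", ...
```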
As this is too large a space for a RL agent to effectively explore, the knowledge graph is used to prune this space by ranking actions based on their presence in the current knowledge graph and the relations between the objects in the graph as in BIBREF7 The architecture for the deep Q-network consists of two separate neural networks—encoding state and action separately—with the final $Q$-value for a state-action pair being the result of a pair-wise interaction function between the two (Figure FIGREF2). We train with a standard DQN training loop; the policy is determined by the $Q$-value of a particular state-action pair, which is updated using the Bellman equation BIBREF24: where $\gamma $ refers to the discount factor and $r_{t+1}$ is the observed reward. The whole system is trained using prioritized experience replay BIBREF25, a modified version of $\epsilon $-greedy learning, and a temporal difference loss that is computed as: where $\mathbf {A_{k+1}}$ represents the action set at step $k$ + 1 and $\mathbf {s_t,a_t}$ refer to the encoded state and action representations respectively. Knowledge Graph Seeding In this section we consider the problem of transferring a knowledge graph from a static text resource to a DQN—which we refer to as seeding. KG-DQN uses a knowledge graph as a state representation and also to prune the action space. This graph is built up over time, through the course of the agent's exploration. When the agent first starts the game, however, this graph is empty and does not help much in the action pruning process. The agent thus wastes a large number of steps near the beginning of each game exploring ineffectively. The intuition behind seeding the knowledge graph from another source is to give the agent a prior on which actions have a higher utility and thereby enabling more effective exploration. Text-adventure games typically belong to a particular genre of storytelling—e.g., horror, sci-fi, or soap opera—and an agent is at a distinct disadvantage if it doesn't have any genre knowledge. Thus, the goal of seeding is to give the agent a strong prior. This seed knowledge graph is extracted from online general text-adventure guides as well as game/genre specific guides when available. The graph is extracted from this the guide using a subset of the rules described in Section SECREF3 used to extract information from the game observations, with the remainder of the RDF triples coming from OpenIE. There is no map of rooms in the environment that can be built, but it is possible to extract information regarding affordances of frequently occurring objects as well as common actions that can be performed across a wide range of text-adventure games. This extracted graph is thus potentially disjoint, containing only this generalizable information, in contrast to the graph extracted during the rest of the exploration process. An example of a graph used to seed KG-DQN is given in Fig. FIGREF5. The KG-DQN is initialized with this knowledge graph. Task Specific Transfer The overarching goal of transfer learning in text-adventure games is to be able to train an agent on one game and use this training on to improve the learning capabilities of another. There is growing body of work on improving training times on target tasks by transferring network parameters trained on source tasks BIBREF21, BIBREF20, BIBREF22. Of particular note is the work by BIBREF21, where they train a policy on a source task and then use this to help learn a new set of parameters on a target task. 
In this approach, decisions made during the training of the target task are jointly made using the frozen parameters of the transferred policy network as well as the current policy network. Our system first trains a question-answering system BIBREF26 using traces given by an oracle, as in Section SECREF4. For commercial text-adventure games, these traces take the form of state-action pairs generated using perfect walkthrough descriptions of the game found online as described in Section SECREF4. We use the parameters of the question-answering system to pre-train portions of the deep Q-network for a different game within in the same domain. The portions that are pre-trained are the same parts of the architecture as in BIBREF7. This game is referred to as the source task. The seeding of the knowledge graph is not strictly necessary but given that state-of-the-art DRL agents cannot complete real games, this makes the agent more effective at the source task. We then transfer the knowledge and skills acquired from playing the source task to another game from the same genre—the target task. The parameters of the deep Q-network trained on the source game are used to initialize a new deep Q-network for the target task. All the weights indicated in the architecture of KG-DQN as shown in Fig. FIGREF2 are transferred. Unlike BIBREF21, we do not freeze the parameters of the deep Q-network trained on the source task nor use the two networks to jointly make decisions but instead just use it to initialize the parameters of the target task deep Q-network. This is done to account for the fact that although graph embeddings can be transferred between games, the actual graph extracted from a game is non-transferable due to differences in structure between the games. Experiments We test our system on two separate sets of games in different domains using the Jericho and TextWorld frameworks BIBREF27, BIBREF5. The first set of games is “slice of life” themed and contains games that involve mundane tasks usually set in textual descriptions of normal houses. The second set of games is “horror” themed and contains noticeably more difficult games with a relatively larger vocabulary size and action set, non-standard fantasy names, etc. We choose these domains because of the availability of games in popular online gaming communities, the degree of vocabulary overlap within each theme, and overall structure of games in each theme. Specifically, there must be at least three games in each domain: at least one game to train the question-answering system on, and two more to train the parameters of the source and target task deep Q-networks. A summary of the statistics for the games is given in Table TABREF7. Vocabulary overlap is calculated by measuring the percentage of overlap between a game's vocabulary and the domain's vocabulary, i.e. the union of the vocabularies for all the games we use within the domain. We observe that in both of these domains, the complexity of the game increases steadily from the game used for the question-answering system to the target and then source task games. We perform ablation tests within each domain, mainly testing the effects of transfer from seeding, oracle-based question-answering, and source-to-target parameter transfer. Additionally, there are a couple of extra dimensions of ablations that we study, specific to each of the domains and explained below. All experiments are run three times using different random seeds. 
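Before the experimental setup, here is a minimal sketch of the parameter transfer described in the Task Specific Transfer section above, assuming the source and target KG-DQNs share the same architecture; variable names are illustrative.

```python
import torch

def transfer_parameters(source_dqn, target_dqn):
    """Initialize the target-task KG-DQN with the weights of the deep Q-network
    trained on the source game. Unlike prior work, the transferred parameters
    are not frozen and no joint decision-making is used: they simply serve as
    the starting point for target-task training."""
    target_dqn.load_state_dict(source_dqn.state_dict())
    for param in target_dqn.parameters():
        param.requires_grad = True   # no freezing: everything is fine-tuned on the target game
    return target_dqn
```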
For all the experiments we report metrics known to be important for transfer learning tasks BIBREF28, BIBREF19: average reward collected in the first 50 episodes (init. reward), average reward collected for 50 episodes after convergence (final reward), and number of steps taken to finish the game for 50 episodes after convergence (steps). For the metrics tested after convergence, we set $\epsilon =0.1$ following both BIBREF2 and BIBREF7. We use similar hyperparameters to those reported in BIBREF7 for training the KG-DQN with action pruning, with the main difference being that we use 100-dimensional word embeddings instead of 50 dimensions for the horror genre. Experiments ::: Slice of Life Experiments TextWorld uses a grammar to generate similar games. Following BIBREF7, we use TextWorld's “home” theme to generate the games for the question-answering system. TextWorld is a framework that uses a grammar to randomly generate game worlds and quests. This framework also gives us information such as instructions on how to finish the quest, and a list of actions that can be performed at each step based on the current world state. We do not let our agent access this additional solution information or admissible actions list. Given the relatively small quest length for TextWorld games—games can be completed in as little as 5 steps—we generate 50 such games and partition them into train and test sets in a 4:1 ratio. The traces are generated on the training set, and the question-answering system is evaluated on the test set. We then pick a random game from the test set to train our source task deep Q-network for this domain. For this training, we use the reward function provided by TextWorld: +1 for each action taken that moves the agent closer to finishing the quest; -1 for each action taken that extends the minimum number of steps needed to finish the quest from the current stage; 0 for all other situations. We choose the game 9:05 as our target task game due to similarities in structure in addition to the vocabulary overlap. Note that there are multiple possible endings to this game and we pick the simplest one for the purpose of training our agent. Experiments ::: Horror Experiments For the horror domain, we choose Lurking Horror to train the question-answering system on. The source and target task games are chosen as Afflicted and Anchorhead respectively. However, due to the size and complexity of these two games, some modifications to the games are required for the agent to be able to effectively solve them. We partition each of these games and make them smaller by reducing the final goal of the game to an intermediate checkpoint leading to it. These checkpoints were identified manually using walkthroughs of the game; each game has a natural intermediate goal. For example, Anchorhead is segmented into 3 chapters in the form of objectives spread across 3 days, of which we use only the first chapter. The exact details of the games after partitioning are described in Table TABREF7. For Lurking Horror, we report numbers relevant for the oracle walkthrough. We then pre-prune the action space and use only the actions that are relevant for the sections of the game that we have partitioned out. The majority of the environment is still available for the agent to explore, but the game ends upon completion of the chosen intermediate checkpoint. 
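The TextWorld reward used for the slice-of-life source task, as described above, reduces to a simple comparison of quest progress; a sketch with names of our own choosing:

```python
def textworld_quest_reward(prev_steps_to_goal, new_steps_to_goal):
    """+1 when an action moves the agent closer to finishing the quest, -1 when
    it increases the minimum number of steps still needed, and 0 otherwise."""
    if new_steps_to_goal < prev_steps_to_goal:
        return 1
    if new_steps_to_goal > prev_steps_to_goal:
        return -1
    return 0
```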
Experiments ::: Reward Augmentation The combined state-action space for a commercial text-adventure game is quite large and the corresponding reward function is very sparse in comparison. The default, implied reward signal is to receive positive value upon completion of the game, and no reward value elsewhere. This is problematic from an experimentation perspective as text-adventure games are too complex for even state-of-the-art deep reinforcement learning agents to complete. Even using transfer learning methods, a sparse reward signal usually results in ineffective exploration by the agent. To make experimentation feasible, we augment the reward to give the agent a dense reward signal. Specifically, we use an oracle to generate state-action traces (identical to the process used when training the question-answering system). An oracle is an agent that is capable of playing and finishing a game perfectly in the least number of steps possible. The state-action pairs generated using perfect walkthroughs of the game are then used as checkpoints to give the agent additional reward. If the agent encounters any of these state-action pairs when training, i.e. performs the right action given a corresponding state, it receives a proportional reward in addition to the standard reward built into the game. This reward is scaled based on the game and is designed to be less than the smallest reward given by the original reward function to prevent it from overpowering the built-in reward. We refer to agents using this technique as having “dense” reward and “sparse” reward otherwise. The agent otherwise receives no information from the oracle about how to win the game. Results/Discussion The structure of the experiments is such that for each of the domains, the target task game is more complex than the source task game. The slice of life games are also generally less complex than the horror games; they have a simpler vocabulary and a more linear quest structure. Additionally, given the nature of interactive fiction games, it is nearly impossible—even for human players—to achieve completion in the minimum number of steps (as given by the steps to completion in Table TABREF7); each of these games is puzzle-based and requires extensive exploration and interaction with various objects in the environment to complete. Table TABREF19 and Table TABREF20 show results for the slice of life and horror domains, respectively. In both domains, seeding and QA pre-training improve performance by similar amounts over the baseline on both the source and target task games. A series of t-tests comparing the results of the pre-training and graph seeding with the baseline KG-DQN show that all results are significant with $p<0.05$. Both the pre-training and graph seeding perform similar functions in enabling the agent to explore more effectively while picking high utility actions. Even when untuned, i.e. evaluating the agent on the target task after having only trained on the source task, the agent shows better performance than training on the target task from scratch using the sparse reward. As expected, we see a further gain in performance when the dense reward function is used for both of these domains as well. In the horror domain, the agent fails to converge to a state where it is capable of finishing the game without the dense reward function due to the horror games being more complex. 
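For concreteness, the reward augmentation described at the start of this section amounts to something like the sketch below; the bonus value and the set representation of the oracle trace are illustrative assumptions (the paper only specifies that the bonus is scaled per game to stay below the smallest built-in reward).

```python
def augmented_reward(state, action, base_reward, oracle_trace, bonus=0.1):
    """Dense reward shaping: if the agent reproduces a state-action pair from
    the oracle walkthrough, it receives a small bonus on top of the game's
    built-in reward; the bonus is kept below the smallest built-in reward so
    that it never dominates."""
    if (state, action) in oracle_trace:   # oracle_trace: set of (state, action) pairs
        return base_reward + bonus
    return base_reward
```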
When an agent is trained on just the target task horror game, Anchorhead, it does not converge to completion and only gets as far as achieving a reward of approximately 7 (max. observed reward from the best model is 41). This corresponds to a point in the game where the player is required to use a term in an action that the player has never observed before, “look up Verlac” when in front of a certain file cabinet—“Verlac” being the unknown entity. Without seeding or QA pre-training, the agent is unable to cut down the action space enough to effectively explore and find the solution to progress further. The relative effectiveness of the gains in initial reward due to seeding appears to depend on the game and the corresponding static text document. In all situations except Anchorhead, seeding provides comparable gains in initial reward as compared to QA — there is no statistical difference between the two when performing similar t-tests. When the full system is used—i.e. we seed the knowledge graph, pre-train QA, then train the source task game, then the target task game using the augmented reward function—we see a significant gain in performance, up to an 80% gain in terms of completion steps in some cases. The bottleneck at reward 7 is still difficult to pass, however, as seen in Fig. FIGREF18, in which we can see that the agent spends a relatively long time around this reward level unless the full transfer technique is used. We further see in Figures FIGREF17, FIGREF18 that transferring knowledge results in the agent learning this higher-quality policy much faster. In fact, we note that training a full system is more efficient than just training the agent on a single task, i.e. training a QA system then a source task game for 50 episodes then transferring and training a seeded target task game for 50 episodes is more effective than just training the target task game by itself for even 150+ episodes. Conclusions We have demonstrated that using knowledge graphs as a state representation enables efficient transfer between deep reinforcement learning agents designed to play text-adventure games, reducing training times and increasing the quality of the learned control policy. Our results show that we are able to extract a graph from a general static text resource and use that to give the agent knowledge regarding domain specific vocabulary, object affordances, etc. Additionally, we demonstrate that we can effectively transfer knowledge using deep Q-network parameter weights, either by pre-training portions of the network using a question-answering system or by transferring parameters from a source to a target game. Our agent trains faster overall, including the number of episodes required to pre-train and train on a source task, and performs up to 80% better on convergence than an agent not utilizing these techniques. We conclude that knowledge graphs enable transfer in deep reinforcement learning agents by providing the agent with a more explicit–and interpretable–mapping between the state and action spaces of different games. This mapping helps overcome the twin challenges of partial observability and combinatorially large action spaces inherent in all text-adventure games by allowing the agent to better explore the state-action space. Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. IIS-1350339. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Lurking Horror, Afflicted, Anchorhead, 9:05, TextWorld games
184e1f28f96babf468f2bb4e1734f69646590cda
184e1f28f96babf468f2bb4e1734f69646590cda_0
Q: How is the domain knowledge transfer represented as knowledge graph? Text: Introduction Text adventure games, in which players must make sense of the world through text descriptions and declare actions through natural language, can provide a stepping stone toward more real-world environments where agents must communicate to understand the state of the world and affect change in the world. Despite the steadily increasing body of research on text-adventure games BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, and in addition to the ubiquity of deep reinforcement learning applications BIBREF8, BIBREF9, teaching an agent to play text-adventure games remains a challenging task. Learning a control policy for a text-adventure game requires a significant amount of exploration, resulting in training runs that take hundreds of thousands of simulations BIBREF2, BIBREF7. One reason that text-adventure games require so much exploration is that most deep reinforcement learning algorithms are trained on a task without a real prior. In essence, the agent must learn everything about the game from only its interactions with the environment. Yet, text-adventure games make ample use of commonsense knowledge (e.g., an axe can be used to cut wood) and genre themes (e.g., in a horror or fantasy game, a coffin is likely to contain a vampire or other undead monster). This is in addition to the challenges innate to the text-adventure game itself—games are puzzles—which results in inefficient training. BIBREF7 developed a reinforcement learning agent that modeled the text environment as a knowledge graph and achieved state-of-the-art results on simple text-adventure games provided by the TextWorld BIBREF5 environment. They observed that a simple form of transfer from very similar games greatly improved policy training time. However, games beyond the toy TextWorld environments are beyond the reach of state-of-the-art techniques. In this paper, we explore the use of knowledge graphs and associated neural embeddings as a medium for domain transfer to improve training effectiveness on new text-adventure games. Specifically, we explore transfer learning at multiple levels and across different dimensions. We first look at the effects of playing a text-adventure game given a strong prior in the form of a knowledge graph extracted from generalized textual walk-throughs of interactive fiction as well as those made specifically for a given game. Next, we explore the transfer of control policies in deep Q-learning (DQN) by pre-training portions of a deep Q-network using question-answering and by DQN-to-DQN parameter transfer between games. We evaluate these techniques on two different sets of human authored and computer generated games, demonstrating that our transfer learning methods enable us to learn a higher-quality control policy faster. Background and Related Work Text-adventure games, in which an agent must interact with the world entirely through natural language, provide us with two challenges that have proven difficult for deep reinforcement learning to solve BIBREF2, BIBREF4, BIBREF7: (1) The agent must act based only on potentially incomplete textual descriptions of the world around it. The world is thus partially observable, as the agent does not have access to the state of the world at any stage. (2) the action space is combinatorially large—a consequence of the agent having to declare commands in natural language. 
These two problems together have kept commercial text adventure games out of the reach of existing deep reinforcement learning methods, especially given the fact that most of these methods attempt to train on a particular game from scratch. Text-adventure games can be treated as partially observable Markov decision processes (POMDPs). This can be represented as a 7-tuple of $\langle S,T,A, \Omega , O,R, \gamma \rangle $: the set of environment states, conditional transition probabilities between states, words used to compose text commands, observations, conditional observation probabilities, the reward function, and the discount factor respectively BIBREF5. Multiple recent works have explored the challenges associated with these games BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. BIBREF2 introduce the LSTM-DQN, which learns to score the action verbs and corresponding objects separately and then combine them into a single action. BIBREF1 propose the Deep Reinforcement Relevance Network that consists of separate networks to encode state and action information, with a final Q-value for a state-action pair that is computed between a pairwise interaction function between these. BIBREF4 present the Action Elimination Network (AEN), which restricts actions in a state to the top-k most likely ones, using the emulator's feedback. BIBREF10 design an agent that uses multiple modules to identify a general set of game play rules for text games across various domains. None of these works study how to transfer policies between different text-adventure games in any depth and so there exists a gap between the two bodies of work. Transferring policies across different text-adventure games requires implicitly learning a mapping between the games' state and action spaces. The more different the domain of the two games, the harder this task becomes. Previous work BIBREF7 introduced the use of knowledge graphs and question-answering pre-training to aid in the problems of partial observability and a combinatorial action space. This work made use of a system called TextWorld BIBREF5 that uses grammars to generate a series of similar (but not exact same) games. An oracle was used to play perfect games and the traces were used to pre-train portions of the agent's network responsible for encoding the observations, graph, and actions. Their results show that this form of pre-training improves the quality of the policy at convergence it does not show a significant improvement in the training time required to reach convergence. Further, it is generally unrealistic to have a corpus of very similar games to draw from. We build on this work, and explore modifications of this algorithm that would enable more efficient transfer in text-adventure games. Work in transfer in reinforcement learning has explored the idea of transferring skills BIBREF11, BIBREF12 or transferring value functions/policies BIBREF13. Other approaches attempt transfer in model-based reinforcement learning BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, though traditional approaches here rely heavily on hand crafting state-action mappings across domains. BIBREF19 learn to play games by predicting mappings across domains using a both deep Q-networks and value iteration networks, finding that that grounding the game state using natural language descriptions of the game itself aids significantly in transferring useful knowledge between domains. 
In transfer for deep reinforcement learning, BIBREF8 propose the Actor-Mimic network which learns from expert policies for a source task using policy distillation and then initializes the network for a target task using these parameters. BIBREF20 also use policy distillation, using task specific features as inputs to a multi-task policy network and use a hierarchical experience sampling method to train this multi-task network. Similarly, BIBREF21 attempt to transfer parameters by using frozen parameters trained on source tasks to help learn a new set of parameters on target tasks. BIBREF22 attempt something similar but use attention networks to transfer expert policies between tasks. These works, however, do not study the requirements for enabling efficient transfer for tasks rooted in natural language, nor do they explore the use of knowledge graphs as a state representation. Knowledge Graphs for DQNs A knowledge graph is a directed graph formed by a set of semantic, or RDF, triples in the form of $\langle subject, relation, object\rangle $—for example, $\langle vampires, are, undead\rangle $. We follow the open-world assumption that what is not in our knowledge graph can either be true or false. BIBREF7 introduced the Knowledge Graph DQN (KG-DQN) and touched on some aspects of transfer learning, showing that pre-training portions of the deep Q-network using question answering system on perfect playthroughs of a game increases the quality of the learned control policy for a generated text-adventure game. We build on this work and use KG-DQN to explore transfer with both knowledge graphs and network parameters. Specifically we seek to transfer skills and knowledge from (a) static text documents describing game play and (b) from playing one text-adventure game to a second complete game in in the same genre (e.g., horror games). The rest of this section describes KG-DQN in detail and summarizes our modifications. For each step that the agent takes, it automatically extracts a set of RDF triples from the received observation through the use of OpenIE BIBREF23 in addition to a few rules to account for the regularities of text-adventure games. The graph itself is more or less a map of the world, with information about objects' affordances and attributes linked to the rooms that they are place in in a map. The graph also makes a distinction with respect to items that are in the agent's possession or in their immediate surrounding environment. We make minor modifications to the rules used in BIBREF7 to better achieve such a graph in general interactive fiction environments. The agent also has access to all actions accepted by the game's parser, following BIBREF2. For general interactive fiction environments, we develop our own method to extract this information. This is done by extracting a set of templates accepted by the parser, with the objects or noun phrases in the actions replaces with a OBJ tag. An example of such a template is "place OBJ in OBJ". These OBJ tags are then filled in by looking at all possible objects in the given vocabulary for the game. This action space is of the order of $A=\mathcal {O}(|V| \times |O|^2)$ where $V$ is the number of action verbs, and $O$ is the number of distinct objects in the world that the agent can interact with. 
As this is too large a space for an RL agent to effectively explore, the knowledge graph is used to prune this space by ranking actions based on their presence in the current knowledge graph and the relations between the objects in the graph, as in BIBREF7. The architecture for the deep Q-network consists of two separate neural networks—encoding state and action separately—with the final $Q$-value for a state-action pair being the result of a pair-wise interaction function between the two (Figure FIGREF2). We train with a standard DQN training loop; the policy is determined by the $Q$-value of a particular state-action pair, which is updated using the Bellman equation BIBREF24, where $\gamma $ refers to the discount factor and $r_{t+1}$ is the observed reward. The whole system is trained using prioritized experience replay BIBREF25, a modified version of $\epsilon $-greedy learning, and a temporal difference loss in which $\mathbf {A_{k+1}}$ represents the action set at step $k+1$ and $\mathbf {s_t,a_t}$ refer to the encoded state and action representations respectively. Knowledge Graph Seeding In this section we consider the problem of transferring a knowledge graph from a static text resource to a DQN—which we refer to as seeding. KG-DQN uses a knowledge graph as a state representation and also to prune the action space. This graph is built up over time, through the course of the agent's exploration. When the agent first starts the game, however, this graph is empty and does not help much in the action pruning process. The agent thus wastes a large number of steps near the beginning of each game exploring ineffectively. The intuition behind seeding the knowledge graph from another source is to give the agent a prior on which actions have a higher utility and thereby enable more effective exploration. Text-adventure games typically belong to a particular genre of storytelling—e.g., horror, sci-fi, or soap opera—and an agent is at a distinct disadvantage if it doesn't have any genre knowledge. Thus, the goal of seeding is to give the agent a strong prior. This seed knowledge graph is extracted from online general text-adventure guides as well as game/genre specific guides when available. The graph is extracted from the guide using a subset of the rules described in Section SECREF3 for extracting information from the game observations, with the remainder of the RDF triples coming from OpenIE. There is no map of rooms in the environment that can be built, but it is possible to extract information regarding affordances of frequently occurring objects as well as common actions that can be performed across a wide range of text-adventure games. This extracted graph is thus potentially disjoint, containing only this generalizable information, in contrast to the graph extracted during the rest of the exploration process. An example of a graph used to seed KG-DQN is given in Fig. FIGREF5. The KG-DQN is initialized with this knowledge graph. Task Specific Transfer The overarching goal of transfer learning in text-adventure games is to be able to train an agent on one game and use this training to improve the learning capabilities of another. There is a growing body of work on improving training times on target tasks by transferring network parameters trained on source tasks BIBREF21, BIBREF20, BIBREF22. Of particular note is the work by BIBREF21, where they train a policy on a source task and then use this to help learn a new set of parameters on a target task.
In this approach, decisions made during the training of the target task are jointly made using the frozen parameters of the transferred policy network as well as the current policy network. Our system first trains a question-answering system BIBREF26 using traces given by an oracle, as in Section SECREF4. For commercial text-adventure games, these traces take the form of state-action pairs generated using perfect walkthrough descriptions of the game found online, as described in Section SECREF4. We use the parameters of the question-answering system to pre-train portions of the deep Q-network for a different game within the same domain. The portions that are pre-trained are the same parts of the architecture as in BIBREF7. This game is referred to as the source task. The seeding of the knowledge graph is not strictly necessary, but given that state-of-the-art DRL agents cannot complete real games, it makes the agent more effective at the source task. We then transfer the knowledge and skills acquired from playing the source task to another game from the same genre—the target task. The parameters of the deep Q-network trained on the source game are used to initialize a new deep Q-network for the target task. All the weights indicated in the architecture of KG-DQN as shown in Fig. FIGREF2 are transferred. Unlike BIBREF21, we do not freeze the parameters of the deep Q-network trained on the source task, nor do we use the two networks to jointly make decisions; instead we use them only to initialize the parameters of the target task deep Q-network. This is done to account for the fact that although graph embeddings can be transferred between games, the actual graph extracted from a game is non-transferable due to differences in structure between the games. Experiments We test our system on two separate sets of games in different domains using the Jericho and TextWorld frameworks BIBREF27, BIBREF5. The first set of games is “slice of life” themed and contains games that involve mundane tasks usually set in textual descriptions of normal houses. The second set of games is “horror” themed and contains noticeably more difficult games with a relatively larger vocabulary size and action set, non-standard fantasy names, etc. We choose these domains because of the availability of games in popular online gaming communities, the degree of vocabulary overlap within each theme, and the overall structure of games in each theme. Specifically, there must be at least three games in each domain: at least one game to train the question-answering system on, and two more to train the parameters of the source and target task deep Q-networks. A summary of the statistics for the games is given in Table TABREF7. Vocabulary overlap is calculated by measuring the percentage of overlap between a game's vocabulary and the domain's vocabulary, i.e. the union of the vocabularies for all the games we use within the domain. We observe that in both of these domains, the complexity of the game increases steadily from the game used for the question-answering system to the target and then source task games. We perform ablation tests within each domain, mainly testing the effects of transfer from seeding, oracle-based question-answering, and source-to-target parameter transfer. Additionally, there are a couple of extra dimensions of ablations that we study, specific to each of the domains and explained below. All experiments are run three times using different random seeds.
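A minimal sketch of how the vocabulary-overlap statistic reported in Table TABREF7 could be computed is given below. The function name is ours, and the exact construction of the domain vocabulary (for example, whether the game itself is excluded from the union) is an assumption, since the text only says it is the union of the games' vocabularies.

```python
def vocab_overlap(game_vocab: set, domain_vocab: set) -> float:
    """Percentage of a game's vocabulary that also appears in the domain
    vocabulary (the union of the vocabularies of all games in the domain)."""
    if not game_vocab:
        return 0.0
    return 100.0 * len(game_vocab & domain_vocab) / len(game_vocab)

# e.g. domain_vocab = set.union(*[vocab_qa_game, vocab_source_game, vocab_target_game])
#      vocab_overlap(vocab_target_game, domain_vocab)
```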
For all the experiments we report metrics known to be important for transfer learning tasks BIBREF28, BIBREF19: average reward collected in the first 50 episodes (init. reward), average reward collected for 50 episodes after convergence (final reward), and number of steps taken to finish the game for 50 episodes after convergence (steps). For the metrics tested after convergence, we set $\epsilon =0.1$ following both BIBREF2 and BIBREF7. We use similar hyperparameters to those reported in BIBREF7 for training the KG-DQN with action pruning, with the main difference being that we use 100 dimensional word embeddings instead of 50 dimensions for the horror genre. Experiments ::: Slice of Life Experiments TextWorld is a framework that uses a grammar to randomly generate game worlds and quests, and can therefore produce a series of similar games. Following BIBREF7, we use TextWorld's “home” theme to generate the games for the question-answering system. This framework also gives us information such as instructions on how to finish the quest, and a list of actions that can be performed at each step based on the current world state. We do not let our agent access this additional solution information or admissible actions list. Given the relatively small quest length for TextWorld games—games can be completed in as little as 5 steps—we generate 50 such games and partition them into train and test sets in a 4:1 ratio. The traces are generated on the training set, and the question-answering system is evaluated on the test set. We then pick a random game from the test set to train our source task deep Q-network for this domain. For this training, we use the reward function provided by TextWorld: +1 for each action taken that moves the agent closer to finishing the quest; -1 for each action taken that extends the minimum number of steps needed to finish the quest from the current stage; 0 for all other situations. We choose the game 9:05 as our target task game due to similarities in structure in addition to the vocabulary overlap. Note that there are multiple possible endings to this game and we pick the simplest one for the purpose of training our agent. Experiments ::: Horror Experiments For the horror domain, we choose Lurking Horror to train the question-answering system on. The source and target task games are chosen as Afflicted and Anchorhead respectively. However, due to the size and complexity of these two games, some modifications are required for the agent to be able to effectively solve them. We partition each of these games and make them smaller by reducing the final goal of the game to an intermediate checkpoint leading to it. These checkpoints were identified manually using walkthroughs of the game; each game has a natural intermediate goal. For example, Anchorhead is segmented into 3 chapters in the form of objectives spread across 3 days, of which we use only the first chapter. The exact details of the games after partitioning are described in Table TABREF7. For Lurking Horror, we report numbers relevant for the oracle walkthrough. We then pre-prune the action space and use only the actions that are relevant for the sections of the game that we have partitioned out. The majority of the environment is still available for the agent to explore, but the game ends upon completion of the chosen intermediate checkpoint.
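Before turning to the reward augmentation, here is a hedged sketch of the source-to-target parameter transfer described in the Task Specific Transfer section: the target-task network is initialized from the source-task weights without freezing anything, and any parameters whose shapes differ between the two games (for example, vocabulary-dependent layers) are left at their fresh initialization. Module and checkpoint names are placeholders, assuming a PyTorch implementation.

```python
import torch

def initialize_from_source(target_net: torch.nn.Module, source_ckpt_path: str) -> torch.nn.Module:
    """Copy every source-task parameter whose name and shape match into the
    target-task network; nothing is frozen afterwards."""
    source_state = torch.load(source_ckpt_path, map_location="cpu")
    target_state = target_net.state_dict()
    transferred = {
        name: tensor
        for name, tensor in source_state.items()
        if name in target_state and target_state[name].shape == tensor.shape
    }
    target_state.update(transferred)
    target_net.load_state_dict(target_state)
    return target_net
```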
Experiments ::: Reward Augmentation The combined state-action space for a commercial text-adventure game is quite large and the corresponding reward function is very sparse in comparison. The default, implied reward signal is to receive positive value upon completion of the game, and no reward value elsewhere. This is problematic from an experimentation perspective as text-adventure games are too complex for even state-of-the-art deep reinforcement learning agents to complete. Even using transfer learning methods, a sparse reward signal usually results in ineffective exploration by the agent. To make experimentation feasible, we augment the reward to give the agent a dense reward signal. Specifically, we use an oracle to generate state-action traces (in the same way as when training the question-answering system). An oracle is an agent that is capable of playing and finishing a game perfectly in the least number of steps possible. The state-action pairs generated using perfect walkthroughs of the game are then used as checkpoints and used to give the agent additional reward. If the agent encounters any of these state-action pairs when training, i.e. performs the right action given a corresponding state, it receives a proportional reward in addition to the standard reward built into the game. This reward is scaled based on the game and is designed to be less than the smallest reward given by the original reward function to prevent it from overpowering the built-in reward. We refer to agents using this technique as having “dense” reward and “sparse” reward otherwise. The agent otherwise receives no information from the oracle about how to win the game. Results/Discussion The structure of the experiments is such that, for each of the domains, the target task game is more complex than the source task game. The slice of life games are also generally less complex than the horror games; they have a simpler vocabulary and a more linear quest structure. Additionally, given the nature of interactive fiction games, it is nearly impossible—even for human players—to achieve completion in the minimum number of steps (as given by the steps to completion in Table TABREF7); each of these games is puzzle based and requires extensive exploration and interaction with various objects in the environment to complete. Table TABREF19 and Table TABREF20 show results for the slice of life and horror domains, respectively. In both domains seeding and QA pre-training improve performance by similar amounts over the baseline on both the source and target task games. A series of t-tests comparing the results of the pre-training and graph seeding with the baseline KG-DQN shows that all results are significant with $p<0.05$. Both the pre-training and graph seeding perform similar functions in enabling the agent to explore more effectively while picking high utility actions. Even when untuned, i.e. evaluating the agent on the target task after having only trained on the source task, the agent shows better performance than training on the target task from scratch using the sparse reward. As expected, we see a further gain in performance when the dense reward function is used for both of these domains as well. In the horror domain, the agent fails to converge to a state where it is capable of finishing the game without the dense reward function due to the horror games being more complex.
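Because the dense reward function does a lot of work in these results, a minimal sketch of the augmentation described in the Reward Augmentation section may help: a small bonus is added whenever the agent reproduces a state-action pair from the oracle walkthrough, with the bonus kept below the smallest built-in game reward. The fixed bonus here simplifies the proportional scaling described above, and all names are illustrative.

```python
def augmented_reward(game_reward: float, state, action, oracle_trace: set,
                     bonus: float = 0.5) -> float:
    """Dense-reward variant: add a small bonus when the (state, action) pair
    matches a checkpoint from the oracle's perfect walkthrough."""
    if (state, action) in oracle_trace:
        return game_reward + bonus
    return game_reward
```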
When an agent is trained on just the target task horror game, Anchorhead, it does not converge to completion and only gets as far as achieving a reward of approximately 7 (max. observed reward from the best model is 41). This corresponds to a point in the game where the player is required to use a term in an action that the player has never observed before, “look up Verlac” when in front of a certain file cabinet—“Verlac” being the unknown entity. Without seeding or QA pre-training, the agent is unable to cut down the action space enough to effectively explore and find the solution to progress further. The relative effectiveness of the gains in initial reward due to seeding appears to depend on the game and the corresponding static text document. In all situations except Anchorhead, seeding provides comparable gains in initial reward as compared to QA — there is no statistical difference between the two when performing similar t-tests. When the full system is used—i.e. we seed the knowledge graph, pre-train QA, then train the source task game, then the target task game using the augmented reward function—we see a significant gain in performance, up to an 80% gain in terms of completion steps in some cases. The bottleneck at reward 7 is still difficult to pass, however, as seen in Fig. FIGREF18, in which we can see that the agent spends a relatively long time around this reward level unless the full transfer technique is used. We further see in Figures FIGREF17, FIGREF18 that transferring knowledge results in the agent learning this higher quality policy much faster. In fact, we note that training a full system is more efficient than just training the agent on a single task, i.e. training a QA system then a source task game for 50 episodes then transferring and training a seeded target task game for 50 episodes is more effective than just training the target task game by itself for even 150+ episodes. Conclusions We have demonstrated that using knowledge graphs as a state representation enables efficient transfer between deep reinforcement learning agents designed to play text-adventure games, reducing training times and increasing the quality of the learned control policy. Our results show that we are able to extract a graph from a general static text resource and use that to give the agent knowledge regarding domain specific vocabulary, object affordances, etc. Additionally, we demonstrate that we can effectively transfer knowledge using deep Q-network parameter weights, either by pre-training portions of the network using a question-answering system or by transferring parameters from a source to a target game. Our agent trains faster overall, including the number of episodes required to pre-train and train on a source task, and performs up to 80% better on convergence than an agent not utilizing these techniques. We conclude that knowledge graphs enable transfer in deep reinforcement learning agents by providing the agent with a more explicit, interpretable mapping between the state and action spaces of different games. This mapping helps overcome the twin challenges of partial observability and combinatorially large action spaces inherent in all text-adventure games by allowing the agent to better explore the state-action space. Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. IIS-1350339.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
the knowledge graph is used to prune this space by ranking actions based on their presence in the current knowledge graph and the relations between the objects in the graph as in BIBREF7
71fca845edd33f6e227eccde10db73b99a7e157b
71fca845edd33f6e227eccde10db73b99a7e157b_0
Q: What was the baseline? Text: Introduction Opinion Mining and Sentiment Analysis (OMSA) are crucial for determining opinion trends and attitudes about commercial products, companies reputation management, brand monitoring, or to track attitudes by mining social media, etc. Furthermore, given the explosion of information produced and shared via the Internet, especially in social media, it is simply not possible to keep up with the constant flow of new information by manual methods. Early approaches to OMSA were based on document classification, where the task was to determine the polarity (positive, negative, neutral) of a given document or review BIBREF0 , BIBREF1 . A well known benchmark for polarity classification at document level is that of BIBREF2 . Later on, a finer-grained OMSA was deemed necessary. This was motivated by the fact that in a given review more than one opinion about a variety of aspects or attributes of a given product is usually conveyed. Thus, Aspect Based Sentiment Analysis (ABSA) was defined as a task which consisted of identifying several components of a given opinion: the opinion holder, the target, the opinion expression (the textual expression conveying polarity) and the aspects or features. Aspects are mostly domain-dependent. In restaurant reviews, relevant aspects would include “food quality”, “price”, “service”, “restaurant ambience”, etc. Similarly, if the reviews were about consumer electronics such as laptops, then aspects would include “size”, “battery life”, “hard drive capacity”, etc. In the review shown by Figure FIGREF1 there are three different opinions about two different aspects (categories) of the restaurant, namely, the first two opinions are about the quality of the food and the third one about the general ambience of the place. Furthermore, there are just two opinion targets because the target of the third opinion, the restaurant itself, remains implicit. Finally, each aspect is assigned a polarity; in this case all three opinion aspects are negative. In this work we focus on Opinion Target Extraction, which we model as a sequence labelling task. In order to do so, we convert an annotated review such as the one in Figure FIGREF1 into the BIO scheme for learning sequence labelling models BIBREF3 . Example (1) shows the review in BIO format. Tokens in the review are tagged depending on whether they are at the beginning (B-target), inside (I-target) or outside (O) of the opinion target expression. Note that the third opinion target in Figure FIGREF1 is implicit. We learn language independent models which consist of a set of local, shallow features complemented with semantic distributional features based on clusters obtained from a variety of data sources. We show that our approach, despite the lack of hand-engineered, language-specific features, obtains state-of-the-art results in 7 datasets for 6 languages on the ABSA benchmarks BIBREF4 , BIBREF5 , BIBREF6 . The main contribution of this research note is providing an extension or addendum to previous work on sequence labelling BIBREF7 by reporting additional experimental results as well as further insights on the performance of our model across languages on a different NLP task such as Opinion Target Extraction (OTE). Thus, we empirically demonstrate the validity and strong performance of our approach for six languages in seven different datasets of the restaurant domain. Every experiment and result presented in this note is novel. 
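To make the BIO formulation in Example (1) above concrete, below is a small illustrative sketch of the conversion from annotated target spans to B-target/I-target/O tags; the function name, the token-level span representation, and the example review are ours and simplify the character-offset handling used in the actual ABSA annotations.

```python
def to_bio(tokens, target_spans):
    """target_spans: list of (start, end) token indices, end exclusive."""
    tags = ["O"] * len(tokens)
    for start, end in target_spans:
        tags[start] = "B-target"
        for i in range(start + 1, end):
            tags[i] = "I-target"
    return list(zip(tokens, tags))

# e.g. to_bio("The duck confit is always dry".split(), [(1, 3)])
# -> [('The', 'O'), ('duck', 'B-target'), ('confit', 'I-target'),
#     ('is', 'O'), ('always', 'O'), ('dry', 'O')]
```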
In this sense, we show that our approach is not only competitive across languages and domains for Named Entity Recognition, as shown by BIBREF7 , but that it can be straightforwardly adapted to different tasks and domains such as OTE. Furthermore, we release the system and every model trained for public use and to facilitate reproducibility of results. Background Early approaches to Opinion Target Extraction (OTE) were unsupervised, although later on the vast majority of works have been based on supervised and deep learning models. To the best of our knowledge, the first work on OTE was published by BIBREF8 . They created a new task which consisted of generating overviews of the main product features from a collection of customer reviews on consumer electronics. They addressed such task using an unsupervised algorithm based on association mining. Other early unsupervised approaches include BIBREF9 which used a dependency parser to obtain more opinion targets, and BIBREF10 which aimed at extracting opinion targets in newswire via Semantic Role Labelling. From a supervised perspective, BIBREF11 presented an approach which learned the opinion target candidates and a combination of dependency and part-of-speech (POS) paths connecting such pairs. Their results improved the baseline provided by BIBREF8 . Another influential work was BIBREF12 , an unsupervised algorithm called Double Propagation which roughly consists of incrementally augmenting a set of seeds via dependency parsing. Closer to our work, BIBREF13 , BIBREF14 and BIBREF15 approached OTE as a sequence labelling task, modelling the opinion targets using the BIO scheme. The first approach implemented HMM whereas the last two proposed CRFs to solve the problem. In all three cases, their systems included extensive human-designed and linguistically motivated features, such as POS tags, lemmas, dependencies, constituent parsing structure, lexical patterns and semantic features extracted from WordNet BIBREF16 . Quite frequently these works used a third party dataset, or a subset of the original one, or created their own annotated data for their experiments. The result was that it was difficult to draw precise conclusions about the advantages or disadvantages of the proposed methods. In this context, the Aspect Based Sentiment Analysis (ABSA) tasks at SemEval BIBREF4 , BIBREF5 , BIBREF6 provided standard training and evaluation data thereby helping to establish a clear benchmark for the OTE task. Finally, it should be noted that there is a closely related task, namely, the SemEval 2016 task on Stance Detection. Stance detection is related to ABSA, but there is a significant difference. In ABSA the task is to determine whether a piece of text is positive, negative, or neutral with respect to an aspect and a given target (which in Stance Detection is called “author's favorability” towards a given target). However, in Stance Detection the text may express opinion or sentiment about some other target, not mentioned in the given text, and the targets are predefined, whereas in ABSA the targets are open-ended. ABSA Tasks at SemEval Three ABSA editions were held within the SemEval Evaluation Exercises between 2014 and 2016. The ABSA 2014 and 2015 tasks consisted of English reviews only, whereas in the 2016 task 7 more languages were added. Additionally, reviews from four domains were collected for the various sub-tasks across the three editions, namely, Consumer Electronics, Telecommunications, Museums and Restaurant reviews. 
In any case, the only constant in each of the ABSA editions was the inclusion, for the Opinion Target Extraction (OTE) sub-task, of restaurant reviews for every language. Thus, for the experiments presented in this paper we decided to focus on the restaurant domain across 6 languages and the three different ABSA editions. Similarly, this section will be focused on reviewing the OTE results for the restaurant domain. The ABSA task consisted of identifying, for each opinion, the opinion target, the aspect referred to by the opinion and the aspect's polarity. Figure FIGREF1 illustrates the original annotation of a restaurant review in the ABSA 2016 dataset. It should be noted that, out of the three opinion components, only the targets are explicitly represented in the text, which means that OTE can be independently modelled as a sequence labelling problem as shown by Example (1). It is particularly important to notice that the opinion expressions (“dry”, “greasy”, “loud and rude”) are not annotated. Following previous approaches, the first competitive systems for OTE at ABSA were supervised. Among the participants (for English) in the three editions, one team BIBREF17 , BIBREF18 was particularly successful. For ABSA 2014 and 2015 they developed a CRF system with extensive handcrafted linguistic features: POS, head word, dependency relations, WordNet relations, gazetteers and Name Lists based on applying the Double Propagation algorithm BIBREF12 on an initial list of 551 seeds. Interestingly, they also introduced word representation features based on Brown and K-mean clusters. For ABSA 2016, they improved their system by using the output of a Recurrent Neural Network (RNN) to provide additional features. The RNN is trained on the following input features: word embeddings, Name Lists and word clusters BIBREF19 . They were the best system in 2014 and 2016. In 2015 they obtained the second best result, in which the best system, a preliminary version of the one presented in this note, was submitted by the EliXa team BIBREF20 . From 2015 onwards most works have been based on deep learning. BIBREF21 applied RNNs on top of a variety of pre-trained word embeddings, while BIBREF22 presented an architecture in which a RNN based tagger is stacked on top of the features generated by a Convolutional Neural Network (CNN). These systems were evaluated on the 2014 and 2015 datasets, respectively, but they did not go beyond the state-of-the-art. BIBREF23 presented a 7 layer deep CNN combining word embeddings trained on a INLINEFORM0 5 billion word corpus extracted from Amazon BIBREF24 , POS tag features and manually developed linguistic patterns based on syntactic analysis and SenticNet BIBREF25 a concept-level knowledge based build for Sentiment Analysis applications. They only evaluate their system on the English 2014 ABSA data, obtaining best results up to date on that benchmark. More recently, BIBREF26 proposed a coupled multi-layer attention (CMLA) network where each layer consists of a couple of attentions with tensor operators. Unlike previous approaches, their system does not use complex linguistic-based features designed for one specific language. However, whereas previous successful approaches modelled OTE as an independent task, in the CMLA model the attentions interactively learn both the opinion targets and the opinion expressions. 
As opinion expressions are not available in the original ABSA datasets, they had to manually annotate the ABSA training and testing data with the required opinion expressions. Although BIBREF26 did not release the datasets with the annotated opinion expressions, Figure FIGREF5 illustrates what these annotations would look like. Thus, two new attributes (pfrom and pto) annotate the opinion expressions for each of the three opinions (“dry”, “greasy” and “loud and rude”, respectively). Using this new manual information to train their CMLA network they reported the best results so far for ABSA 2014 and 2015 (English only). Finally, BIBREF27 develop a multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations. As BIBREF26 , they use opinion expressions annotations for a joint modelling of opinion targets and expressions. However, unlike BIBREF26 they do not manually annotate the opinion expressions. Instead they manually add sentiment lexicons and rules based on dependency parsing in order to find the opinion words required to train their system. Using this hand-engineered system, they report state of the art results only for English on the ABSA 2016 dataset. They do not provide evaluation results on the 2014 and 2015 restaurant datasets. With respect to other languages, the IIT-T team presented systems for 4 out of the 7 languages in ABSA 2016, obtaining the best score for French and Dutch, second in Spanish but with very poor results for English, well below the baseline. The GTI team BIBREF28 implemented a CRF system using POS, lemmas and bigrams as features. They obtained the best result for Spanish and rather modest results for English. Summarizing, the most successful systems for OTE have been based on supervised approaches with rather elaborate, complex and linguistically inspired features. BIBREF23 obtains best results on the ABSA 2014 data by means of a CNN with word embeddings trained on 5 billion words from Amazon, POS features, manual patterns based on syntactic analysis and SenticNet. More recently, the CMLA deep learning model has established new state-of-the-art results for the 2015 dataset, whereas BIBREF27 provide the state of the art for the 2016 benchmark. Thus, there is not currently a multilingual system that obtains competitive results across (at least) several of the languages included in ABSA. As usual, most of the work has been done for English, with the large majority of the previous systems providing results only for one of the three English ABSA editions and without exploring the multilingual aspect. This could be due to the complex and language-specific systems that performed best for English BIBREF23 , or perhaps because the CMLA approach of BIBREF26 would require, in addition to the opinion targets, the gold standard annotations of the opinion expressions for each of the 6 languages other than English in the ABSA datasets. Methodology The work presented in this research note requires the following resources: (i) Aspect Based Sentiment Analysis (ABSA) data for training and testing; (ii) large unlabelled corpora to obtain semantic distributional features from clustering lexicons; and (iii) a sequence labelling system. In this section we will describe each of the resources used. ABSA Datasets Table TABREF7 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. 
From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set. For English, it should be noted that the size of the 2015 set is less than half with respect to the 2014 dataset in terms of tokens, and only one third in number of targets. The French, Spanish and Dutch datasets are quite similar in terms of tokens although the number of targets in the Dutch dataset is comparatively smaller, possibly due to the tendency to construct compound terms in that language. The Russian dataset is the largest whereas the Turkish set is by far the smallest one. Additionally, we think it is also interesting to note the low number of targets that are multiwords. To provide a couple of examples, for Spanish only the %35.59 of the targets are multiwords whereas for Dutch the percentage goes down to %25.68. If we compare these numbers with the CoNLL 2002 data for Named Entity Recognition (NER), a classic sequence labelling task, we find that in the ABSA data there is less than half the number of multiword targets than the number of multiword entities that can be found in the CoNLL Spanish and Dutch data (%35.59 vs %74.33 for Spanish and %25.68 vs %44.96 for Dutch). Unlabelled Corpora Apart from the manually annotated data, we also leveraged large, publicly available, unlabelled data to train the clusters: (i) Brown 1000 clusters and (ii) Clark and Word2vec clusters in the 100-800 range. In order to induce clusters from the restaurant domain we used the Yelp Academic Dataset, from which three versions were created. First, the full dataset, containing 225M tokens. Second, a subset consisting of filtering out those categories that do not correspond directly to food related reviews BIBREF29 . Thus, out of the 720 categories contained in the Yelp Academic Dataset, we kept the reviews from 173 of them. This Yelp food dataset contained 117M tokens in 997,721 reviews. Finally, we removed two more categories (Hotels and Hotels & Travel) from the Yelp food dataset to create the Yelp food-hotels subset containing around 102M tokens. For the rest of the languages we used their corresponding Wikipedia dumps. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF30 . The number of words used for each dataset, language and cluster type are described in Table TABREF9 . For example, the first row reads “Yelp Academic Dataset containing 225M words was used; after pre-processing, 156M words were taken to induce Brown clusters, whereas Clark and Word2vec clusters were trained on the whole corpus”. As explained in BIBREF7 , we pre-process the corpus before training Brown clusters, resulting in a smaller dataset than the original. Additionally, due to efficiency reasons, when the corpus is too large we use the pre-processed version to induce the Clark clusters. System We use the sequence labeller implemented within IXA pipes BIBREF7 . It learns supervised models based on the Perceptron algorithm BIBREF31 . To avoid duplication of efforts, it uses the Apache OpenNLP project implementation customized with its own features. By design, the sequence labeller aims to establish a simple and shallow feature set, avoiding any linguistic motivated features, with the objective of removing any reliance on costly extra gold annotations and/or cascading errors across annotations. 
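As a concrete illustration of how one of the clustering lexicons listed in Table TABREF9 can be induced before it is handed to the labeller, here is a hedged sketch using gensim (4.x API assumed) and scikit-learn: skip-gram vectors are trained on the unlabelled corpus and K-means then maps each word to a cluster id. Hyperparameters are illustrative, not the ones used in the experiments.

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def induce_word2vec_clusters(tokenized_sentences, n_clusters=400):
    """Train skip-gram embeddings, then cluster the vectors with K-means so
    that every vocabulary word maps to a cluster id (a clustering lexicon)."""
    model = Word2Vec(tokenized_sentences, vector_size=100, sg=1,
                     window=5, min_count=5, workers=4)
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = kmeans.fit_predict(model.wv.vectors)
    return {word: int(label) for word, label in zip(model.wv.index_to_key, labels)}
```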
The system consists of: (i) Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context; and (ii) three types of simple clustering features, based on unigram matching: (i) Brown BIBREF32 clusters, taking the 4th, 8th, 12th and 20th node in the path; (ii) Clark BIBREF33 clusters and, (iii) Word2vec BIBREF34 clusters, based on K-means applied over the extracted word vectors using the skip-gram algorithm. The clustering features look for the cluster class of the incoming token in one or more of the clustering lexicons induced following the three methods listed above. If found, then the class is added as feature (“not found” otherwise). As we work on a 5 token window, for each token and clustering lexicon at least 5 features are generated. For Brown, the number of features generated depend on the number of nodes found in the path for each token and clustering lexicon used. Figure FIGREF13 depicts how our system relates, via clusters, unseen words with those words that have been seen as targets during the training process. Thus, the tokens `french-onions' and `salmon' would be annotated as opinion targets because they occur in the same clusters as seen words which in the training data are labeled as targets. The word representation features are combined and stacked using the clustering lexicons induced over the different data sources listed in Table TABREF9 . In other words, stacking means adding various clustering features of the same type obtained from different data sources (for example, using clusters trained on Yelp and on Wikipedia); combining refers to combining different types of clustering features obtained from the same data source (e.g., using features from Brown and Clark clustering lexicons). To choose the best combination of clustering features we tried, via 5-fold cross validation on the training set, every possible permutation of the available Clark and Word2vec clustering lexicons obtained from the data sources. Once the best combination of Clark and Word2vec clustering lexicons per data source was found, we tried to combine them with the Brown clusters. The result is a rather simple but very competitive system that has proven to be highly successful in the most popular Named Entity Recognition and Classification (NER) benchmarks, both in out-of-domain and in-domain evaluations. Furthermore, it was demonstrated that the system also performed robustly across languages without any language-specific tuning. Details of the system's implementation, including detailed description of the local and clustering features, can be found in BIBREF7 , including a section on how to combine the clustering features. A preliminary version of this system BIBREF20 was the winner of the OTE sub-task in the ABSA 2015 edition (English only). In the next section we show that this system obtains state-of-the-art results not only across domains and languages for NER, but also for other tasks such as Opinion Target Extraction. The results reported are obtained using the official ABSA evaluation scripts BIBREF4 , BIBREF5 , BIBREF6 . Experimental Results In this section we report on the experiments performed using the system and data described above. First we will present the English results for the three ABSA editions as well as a comparison with previous work. After that we will do the same for 5 additional languages included in the ABSA 2016 edition: Dutch, French, Russian, Spanish and Turkish. 
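The clustering features just described boil down to a unigram lookup over a 5-token window; the sketch below is a minimal illustration under our own naming, with the Brown path prefixes at nodes 4, 8, 12 and 20 handled as string prefixes of the bit-string path. It is not the actual IXA pipes implementation.

```python
def cluster_features(tokens, i, lexicons, window=2):
    """Emit one feature per (window position, clustering lexicon); lexicons maps
    a lexicon name (e.g. 'brown_yelp', 'clark_wiki') to a word -> cluster-id
    (or Brown bit-string path) dictionary."""
    feats = []
    for offset in range(-window, window + 1):
        j = i + offset
        token = tokens[j] if 0 <= j < len(tokens) else None
        for name, lexicon in lexicons.items():
            value = lexicon.get(token, "not_found") if token is not None else "pad"
            if name.startswith("brown") and value not in ("not_found", "pad"):
                # Brown clusters: use prefixes of the bit-string path as features
                for prefix_len in (4, 8, 12, 20):
                    feats.append(f"{name}[{offset}]={value[:prefix_len]}")
            else:
                feats.append(f"{name}[{offset}]={value}")
    return feats
```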
The local and clustering features, as described in Section SECREF11 , are the same for every language and evaluation setting. The only change is the clustering lexicons used for the different languages. As stated in section SECREF11 , the best cluster combination is chosen via 5-fold cross validation (CV) on the training data. We first try every permutation with the Clark and Word2vec clusters. Once the best combination is obtained, we then try with the Brown clusters, thus obtaining the final model for each language and dataset. English Table TABREF16 provides detailed results on the Opinion Target Extraction (OTE) task for English. We show in bold our best model (ALL) chosen via 5-fold CV on the training data. Moreover, we also show the results of the best models using only one type of clustering feature, namely, the best Brown, Clark and Word2vec models, respectively. The first noteworthy issue is that the same model obtains the best results on the three English datasets. Second, it is also interesting to note the huge gains obtained by the clustering features, between 6-7 points in F1 score across the three ABSA datasets. Third, the results show that the combination of clustering features induced from different data sources is crucial. Fourth, the clustering features improve the recall by 12-15 points in the 2015 and 2016 data, and around 7 points for 2014. Finally, while in 2014 the precision also increases, in the 2015 setting it degrades by almost 4 points in F1 score. Table TABREF17 compares our results with previous work. MIN refers to the multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations with manually developed rules for detecting opinion expressions BIBREF27 . CNN-SenticNet is the 7 layer CNN with Amazon word embeddings, POS, linguistic rules based on syntax patterns and SenticNet BIBREF23 . LSTM is a Long Short Term Memory neural network built on top of word embeddings as proposed by BIBREF21 . WDEmb BIBREF35 uses word and dependency path, linear context and dependency context embedding features as the input to a CRF. RNCRF is a joint model with CRF and a recursive neural network whereas CMLA is the Coupled Multilayer Attentions model described in section SECREF4 , both systems proposed by BIBREF26 . DLIREC-NLANGP is the winning system at ABSA 2014 and 2016 BIBREF17 , BIBREF18 , BIBREF19 while the penultimate row refers to our own system for all the three benchmarks (details in Table TABREF16 ). The results of Table TABREF17 show that our system, despite its simplicity, is highly competitive, obtaining the best results on the 2015 and 2016 datasets and a competitive performance on the 2014 benchmark. In particular, we outperform much more complex and language-specific approaches tuned via language-specific features, such as that of DLIREC-NLANGP. Furthermore, while the deep learning approaches (enriched with human-engineered linguistic features) obtain comparable or better results on the 2014 data, that is not the case for the 2015 and 2016 benchmarks, where our system also outperforms the MIN and CMLA models (systems which require manually added rules and gold-standard opinion expressions to obtain their best results, as explained in section SECREF4 ). This means that our system obtains better results than MIN and CMLA by learning the targets independently instead of jointly learning the target and the expressions that convey the polarity of the opinion, namely, the opinion expression.
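As an aside on the model selection used to obtain these tables, the search over clustering lexicons described above (every Clark/Word2vec combination scored by 5-fold cross-validation, with Brown lexicons then tried on top of the winner) can be sketched as follows; `train_and_eval_cv` is a placeholder for the actual training routine, and the whole snippet is illustrative rather than the released implementation.

```python
from itertools import combinations

def best_cluster_combination(clark_lexicons, w2v_lexicons, train_data, train_and_eval_cv):
    """Score every non-empty combination of Clark and Word2vec lexicons by
    5-fold CV F1 on the training data and return the best one."""
    pool = list(clark_lexicons) + list(w2v_lexicons)
    candidates = []
    for k in range(1, len(pool) + 1):
        candidates.extend(combinations(pool, k))
    scored = [(train_and_eval_cv(train_data, combo, folds=5), combo) for combo in candidates]
    best_score, best_combo = max(scored, key=lambda pair: pair[0])
    return best_combo
```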
There seems to be also a correlation between the size of the datasets and performance, given that the results on the 2014 data are much higher than those obtained using the 2015 and 2016 datasets. This might be due to the fact that the 2014 training set is substantially larger, as detailed in Table TABREF7 . In fact, the smaller datasets seem to affect more the deep learning approaches (LSTM, WDEmb, RNCRF) where only the MIN and CMLA models obtain similar results to ours, albeit using manually added language-specific annotations. Finally, it would have been interesting to compare MIN, CNN-SenticNet and CMLA with our system on the three ABSA benchmarks, but their systems are not publicly available. Multilingual We trained our system for 5 other languages on the ABSA 2016 datasets, using the same strategy as for English. We choose the best Clark-Word2vec combination (with and without Brown clusters) via 5-cross validation on the training data. The features are exactly the same as those used for English, the only change is the data on which the clusters are trained. Table TABREF19 reports on the detailed results obtained for each of the languages. In bold we show the best model chosen via 5-fold CV. Moreover, we also show the best models using only one of each of the clustering features. The first difference with respect to the English results is that the Brown clustering features are, in three out of five settings, detrimental to performance. Second, that combining clustering features is only beneficial for Spanish. Third, the overall results are in general lower than those obtained in the 2016 English data. Finally, the difference between the best results and the results using the Local features is lower than for English, even though the Local results are similar to those obtained with the English datasets (except for Turkish, but this is due to the significantly smaller size of the data, as shown in Table TABREF7 ). We believe that all these four issues are caused, at least partially, by the lack of domain-specific clustering features used for the multilingual experiments. In other words, while for the English experiments we leveraged the Yelp dataset to train the clustering algorithms, in the multilingual setting we first tried with already available clusters induced from the Wikipedia. Thus, it is to be expected that the gains obtained by clustering features obtained from domain-specific data such as Yelp would be superior to those achieved by the clusters trained on out-of-domain data. In spite of this, Table TABREF20 shows that our system outperforms the best previous approaches across the five languages. In some cases, such as Turkish and Russian, the best previous scores were the baselines provided by the ABSA organizers, but for Dutch, French and Spanish our system is significantly better than current state-of-the-art. In particular, and despite using the same system for every language, we improve over GTI's submission, which implemented a CRF system with linguistic features specific to Spanish BIBREF28 . Discussion and Error Analysis Considering the simplicity of our approach, we obtain best results for 6 languages and 7 different settings in the Opinion Target Extraction (OTE) benchmark for the restaurant domain using the ABSA 2014-2016 datasets. 
These results are obtained without linguistic or manually-engineered features, relying on injecting external knowledge from the combination of clustering features to obtain a robust system across languages, outperforming other more complex and language-specific systems. Furthermore, the feature set used is the same for every setting, reducing human intervention to a minimum and establishing a clear methodology for a fast and easy creation of competitive OTE multilingual taggers. The results also confirm the behaviour of these clustering algorithms to provide features for sequence labelling tasks such as OTE and Named Entity Recognition (NER), as previously discussed in BIBREF7 . Thus, in every evaluation setting the best results using Brown clusters as features were obtained when data close to the application domain and text genre, even if relatively small, was used to train the Brown algorithm. This can be clearly seen if we compare the English with the multilingual results. For English, the models including Brown clusters improve the Local features over 3-5 points in F1 score, whereas for Spanish, Dutch and Russian, they worsen performance. The reason is that for English the Yelp dataset is used whereas for the rest of languages the clusters are induced using the Wikipedia, effectively an out-of-domain corpus. The exception is Turkish, for which a 6 point gain in F1 score is obtained, but we believe that is probably due to the small size of the training data used for training the Local model. In contrast, Word2vec clusters clearly benefit from larger amounts of data, as illustrated by the best English Word2vec model being the one trained using the Wikipedia, and not the Yelp dataset, which is closer to the application domain. Finally, the Clark algorithm seems to be the most versatile as it consistently outperforms the other two clustering methods in 4 out of the 8 evaluation settings presented. Summarizing: (i) Brown clusters perform better when leveraged from source data close to the application domain, even if small in size; (ii) Clark clusters are the most robust of the three with respect to the size and domain of the data used; and (iii) for Word2vec size is the crucial factor. The larger the source data the better the performance. Thus, instead of choosing over one clustering type or the other, our system provides a method to effectively combining them, depending on the data sources available, to obtain robust and language independent sequence labelling systems. Finally, results show that our models are particularly competitive when the amount of training data available is small, allowing us to compete with more complex systems including also manually-engineered features, as shown especially by the English results on the 2015 and 2016 data. Error Analysis We will now discuss the shortcomings and most common errors performed by our system for the OTE task. By looking at the overall results in terms of precision and recall, it is possible to see the following patterns: With respect to the Local models, precision is consistently better than recall or, in other words, the coverage of the Local models is quite low. Tables TABREF16 and TABREF19 show that adding clustering features to the Local models allows to improve the recall for every evaluation setting, although with different outcomes. Overall, precision suffers, except for French. 
Furthermore, in three cases (English 2014, 2016 and Russian) precision is lower than recall, whereas the remaining 5 evaluations show that, despite large improvements in F1 score, most errors in our system are caused by false negatives, as can be seen in Table TABREF23 . Table TABREF25 displays the top 5 most common false positive and false negative errors for English, Spanish and French. By inspecting our system's output, and both the test and training sets, we found out that there were three main sources of errors: (a) errors caused by ambiguity in the use of certain source forms that may or may not refer to an opinion target; (b) span errors, where the target has only been partially annotated; and (c) unknown targets, which the system was unable to annotate by generalizing on the training data or clusters. With respect to type (a), it is useful to look at the most common errors for all three languages, namely, `place', `food' and `restaurant', which are also among the top 5 most frequent targets in the gold standard sets. By looking at Examples (1-3) we would say that in all three cases `place' should be annotated as an opinion target. However, (2) is a false positive (FP), (3) is a false negative (FN) and (1) is an example from the training set in which `place' is annotated as target. This is the case with many instances of `place' for which there seems to be some inconsistency in the actual annotation of the training and test set examples. Example (1): Avoid this place! Example (2): this place is a keeper! Example (3): it is great place to watch sporting events. For other frequent type (a) errors, ambiguity is the main problem. Thus, in Spanish the use of `comida' and `restaurante' is highly ambiguous and causes many FPs and FNs because sometimes it is actually an opinion target whereas in many other cases it is just referring to the meal or the restaurant themselves without expressing any opinion about them. The same phenomenon occurs for “food” and “restaurant” in English and for `cuisine' and `restaurant' in French. Span type (b) errors are typically caused by long opinion targets such as “filet mignon on top of spinach and mashed potatoes” for which our system annotates “filet” and “spinach” as separate targets, or “chicken curry and chicken tikka masala” which is wrongly tagged as one target. These cases are difficult because on the surface they look similar but the first one refers to one dish only, hence one target, whereas the second one refers to two separate dishes for which two different opinion targets should be annotated. Of course, these cases are particularly hurtful because they count as both FP and FN. Finally, type (c) errors are usually caused by lack of generalization of our system to deal with unknown targets. Examples (4-7) contain various mentions of the “Ray's” restaurant, which is in the top 5 errors for the English 2016 test set. Example (4): After 12 years in Seattle Ray's rates as the place we always go back to. Example (5): We were only in Seattle for one night and I'm so glad we picked Rays for dinner! Example (6): I love Dungeness crabs and at Ray's you can get them served in about 6 different ways! Example (7): Imagine my happy surprise upon finding that the views are only the third-best thing about Ray's!
Example (8): Ray's is something of a Seattle institution. Examples (4), (5) and (7) are FNs, (6) is an FP caused by wrongly identifying the target as “Ray's you”, whereas (8) is not even annotated in the gold standard or by our system, although it should have been. Concluding Remarks In this research note we provide additional empirical experimentation complementing BIBREF7 , reporting best results for Opinion Target Extraction for 6 languages and 7 datasets using the same set of simple, shallow and language independent features. Furthermore, the results provide some interesting insights with respect to the use of clusters to inject external knowledge via semi-supervised features. First, Brown clusters are particularly beneficial when trained on domain-related data. This seems to be the case in the multilingual setting, where the Brown clusters (trained on out-of-domain Wikipedia data) worsen the system's performance for every language except for Turkish. Second, the results also show that Clark and Word2vec improve results in general, even if induced on out-of-domain data. Third, for best performance it is convenient to combine clusters obtained from diverse data sources, both from in- and out-of-domain corpora. Finally, the results indicate that, even when the amount of training data is small, such as in the 2015 and 2016 English benchmarks, our system's performance remains competitive thanks to the combination of clustering features. This, together with the lack of linguistic features, facilitates the easy and fast development of systems for new domains or languages. These considerations thus confirm the hypotheses stated in BIBREF7 with respect to the use of clustering features to obtain robust sequence taggers across languages and tasks. The system and models for every language and dataset are available as part of the ixa-pipe-opinion module for public use and reproducibility of results. Acknowledgments First, we would like to thank the anonymous reviewers for their comments to improve the paper. We would also like to thank Iñaki San Vicente for his help obtaining the Yelp data. This work has been supported by the Spanish Ministry of Economy and Competitiveness (MINECO/FEDER, UE), under the projects TUNER (TIN2015-65308-C5-1-R) and CROSSTEXT (TIN2015-72646-EXP).
the baseline provided by BIBREF8, the baselines provided by the ABSA organizers
93b299acfb6fad104b9ebf4d0585d42de4047051
93b299acfb6fad104b9ebf4d0585d42de4047051_0
Q: Which datasets are used? Text: Introduction Opinion Mining and Sentiment Analysis (OMSA) are crucial for determining opinion trends and attitudes about commercial products, companies reputation management, brand monitoring, or to track attitudes by mining social media, etc. Furthermore, given the explosion of information produced and shared via the Internet, especially in social media, it is simply not possible to keep up with the constant flow of new information by manual methods. Early approaches to OMSA were based on document classification, where the task was to determine the polarity (positive, negative, neutral) of a given document or review BIBREF0 , BIBREF1 . A well known benchmark for polarity classification at document level is that of BIBREF2 . Later on, a finer-grained OMSA was deemed necessary. This was motivated by the fact that in a given review more than one opinion about a variety of aspects or attributes of a given product is usually conveyed. Thus, Aspect Based Sentiment Analysis (ABSA) was defined as a task which consisted of identifying several components of a given opinion: the opinion holder, the target, the opinion expression (the textual expression conveying polarity) and the aspects or features. Aspects are mostly domain-dependent. In restaurant reviews, relevant aspects would include “food quality”, “price”, “service”, “restaurant ambience”, etc. Similarly, if the reviews were about consumer electronics such as laptops, then aspects would include “size”, “battery life”, “hard drive capacity”, etc. In the review shown by Figure FIGREF1 there are three different opinions about two different aspects (categories) of the restaurant, namely, the first two opinions are about the quality of the food and the third one about the general ambience of the place. Furthermore, there are just two opinion targets because the target of the third opinion, the restaurant itself, remains implicit. Finally, each aspect is assigned a polarity; in this case all three opinion aspects are negative. In this work we focus on Opinion Target Extraction, which we model as a sequence labelling task. In order to do so, we convert an annotated review such as the one in Figure FIGREF1 into the BIO scheme for learning sequence labelling models BIBREF3 . Example (1) shows the review in BIO format. Tokens in the review are tagged depending on whether they are at the beginning (B-target), inside (I-target) or outside (O) of the opinion target expression. Note that the third opinion target in Figure FIGREF1 is implicit. We learn language independent models which consist of a set of local, shallow features complemented with semantic distributional features based on clusters obtained from a variety of data sources. We show that our approach, despite the lack of hand-engineered, language-specific features, obtains state-of-the-art results in 7 datasets for 6 languages on the ABSA benchmarks BIBREF4 , BIBREF5 , BIBREF6 . The main contribution of this research note is providing an extension or addendum to previous work on sequence labelling BIBREF7 by reporting additional experimental results as well as further insights on the performance of our model across languages on a different NLP task such as Opinion Target Extraction (OTE). Thus, we empirically demonstrate the validity and strong performance of our approach for six languages in seven different datasets of the restaurant domain. Every experiment and result presented in this note is novel. 
In this sense, we show that our approach is not only competitive across languages and domains for Named Entity Recognition, as shown by BIBREF7 , but that it can be straightforwardly adapted to different tasks and domains such as OTE. Furthermore, we release the system and every model trained for public use and to facilitate reproducibility of results. Background Early approaches to Opinion Target Extraction (OTE) were unsupervised, although later on the vast majority of works have been based on supervised and deep learning models. To the best of our knowledge, the first work on OTE was published by BIBREF8 . They created a new task which consisted of generating overviews of the main product features from a collection of customer reviews on consumer electronics. They addressed such task using an unsupervised algorithm based on association mining. Other early unsupervised approaches include BIBREF9 which used a dependency parser to obtain more opinion targets, and BIBREF10 which aimed at extracting opinion targets in newswire via Semantic Role Labelling. From a supervised perspective, BIBREF11 presented an approach which learned the opinion target candidates and a combination of dependency and part-of-speech (POS) paths connecting such pairs. Their results improved the baseline provided by BIBREF8 . Another influential work was BIBREF12 , an unsupervised algorithm called Double Propagation which roughly consists of incrementally augmenting a set of seeds via dependency parsing. Closer to our work, BIBREF13 , BIBREF14 and BIBREF15 approached OTE as a sequence labelling task, modelling the opinion targets using the BIO scheme. The first approach implemented HMM whereas the last two proposed CRFs to solve the problem. In all three cases, their systems included extensive human-designed and linguistically motivated features, such as POS tags, lemmas, dependencies, constituent parsing structure, lexical patterns and semantic features extracted from WordNet BIBREF16 . Quite frequently these works used a third party dataset, or a subset of the original one, or created their own annotated data for their experiments. The result was that it was difficult to draw precise conclusions about the advantages or disadvantages of the proposed methods. In this context, the Aspect Based Sentiment Analysis (ABSA) tasks at SemEval BIBREF4 , BIBREF5 , BIBREF6 provided standard training and evaluation data thereby helping to establish a clear benchmark for the OTE task. Finally, it should be noted that there is a closely related task, namely, the SemEval 2016 task on Stance Detection. Stance detection is related to ABSA, but there is a significant difference. In ABSA the task is to determine whether a piece of text is positive, negative, or neutral with respect to an aspect and a given target (which in Stance Detection is called “author's favorability” towards a given target). However, in Stance Detection the text may express opinion or sentiment about some other target, not mentioned in the given text, and the targets are predefined, whereas in ABSA the targets are open-ended. ABSA Tasks at SemEval Three ABSA editions were held within the SemEval Evaluation Exercises between 2014 and 2016. The ABSA 2014 and 2015 tasks consisted of English reviews only, whereas in the 2016 task 7 more languages were added. Additionally, reviews from four domains were collected for the various sub-tasks across the three editions, namely, Consumer Electronics, Telecommunications, Museums and Restaurant reviews. 
In any case, the only constant in each of the ABSA editions was the inclusion, for the Opinion Target Extraction (OTE) sub-task, of restaurant reviews for every language. Thus, for the experiments presented in this paper we decided to focus on the restaurant domain across 6 languages and the three different ABSA editions. Similarly, this section will be focused on reviewing the OTE results for the restaurant domain. The ABSA task consisted of identifying, for each opinion, the opinion target, the aspect referred to by the opinion and the aspect's polarity. Figure FIGREF1 illustrates the original annotation of a restaurant review in the ABSA 2016 dataset. It should be noted that, out of the three opinion components, only the targets are explicitly represented in the text, which means that OTE can be independently modelled as a sequence labelling problem as shown by Example (1). It is particularly important to notice that the opinion expressions (“dry”, “greasy”, “loud and rude”) are not annotated. Following previous approaches, the first competitive systems for OTE at ABSA were supervised. Among the participants (for English) in the three editions, one team BIBREF17 , BIBREF18 was particularly successful. For ABSA 2014 and 2015 they developed a CRF system with extensive handcrafted linguistic features: POS, head word, dependency relations, WordNet relations, gazetteers and Name Lists based on applying the Double Propagation algorithm BIBREF12 on an initial list of 551 seeds. Interestingly, they also introduced word representation features based on Brown and K-means clusters. For ABSA 2016, they improved their system by using the output of a Recurrent Neural Network (RNN) to provide additional features. The RNN is trained on the following input features: word embeddings, Name Lists and word clusters BIBREF19 . They were the best system in 2014 and 2016. In 2015 they obtained the second best result, while the best system, a preliminary version of the one presented in this note, was submitted by the EliXa team BIBREF20 . From 2015 onwards most works have been based on deep learning. BIBREF21 applied RNNs on top of a variety of pre-trained word embeddings, while BIBREF22 presented an architecture in which an RNN-based tagger is stacked on top of the features generated by a Convolutional Neural Network (CNN). These systems were evaluated on the 2014 and 2015 datasets, respectively, but they did not go beyond the state-of-the-art. BIBREF23 presented a 7-layer deep CNN combining word embeddings trained on a corpus of about 5 billion words extracted from Amazon BIBREF24 , POS tag features and manually developed linguistic patterns based on syntactic analysis and SenticNet BIBREF25 , a concept-level knowledge base built for Sentiment Analysis applications. They only evaluate their system on the English 2014 ABSA data, obtaining the best results to date on that benchmark. More recently, BIBREF26 proposed a coupled multi-layer attention (CMLA) network where each layer consists of a couple of attentions with tensor operators. Unlike previous approaches, their system does not use complex linguistic-based features designed for one specific language. However, whereas previous successful approaches modelled OTE as an independent task, in the CMLA model the attentions interactively learn both the opinion targets and the opinion expressions.
As opinion expressions are not available in the original ABSA datasets, they had to manually annotate the ABSA training and testing data with the required opinion expressions. Although BIBREF26 did not release the datasets with the annotated opinion expressions, Figure FIGREF5 illustrates what these annotations would look like. Thus, two new attributes (pfrom and pto) annotate the opinion expressions for each of the three opinions (“dry”, “greasy” and “loud and rude”, respectively). Using this new manual information to train their CMLA network they reported the best results so far for ABSA 2014 and 2015 (English only). Finally, BIBREF27 develop a multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations. As BIBREF26 , they use opinion expressions annotations for a joint modelling of opinion targets and expressions. However, unlike BIBREF26 they do not manually annotate the opinion expressions. Instead they manually add sentiment lexicons and rules based on dependency parsing in order to find the opinion words required to train their system. Using this hand-engineered system, they report state of the art results only for English on the ABSA 2016 dataset. They do not provide evaluation results on the 2014 and 2015 restaurant datasets. With respect to other languages, the IIT-T team presented systems for 4 out of the 7 languages in ABSA 2016, obtaining the best score for French and Dutch, second in Spanish but with very poor results for English, well below the baseline. The GTI team BIBREF28 implemented a CRF system using POS, lemmas and bigrams as features. They obtained the best result for Spanish and rather modest results for English. Summarizing, the most successful systems for OTE have been based on supervised approaches with rather elaborate, complex and linguistically inspired features. BIBREF23 obtains best results on the ABSA 2014 data by means of a CNN with word embeddings trained on 5 billion words from Amazon, POS features, manual patterns based on syntactic analysis and SenticNet. More recently, the CMLA deep learning model has established new state-of-the-art results for the 2015 dataset, whereas BIBREF27 provide the state of the art for the 2016 benchmark. Thus, there is not currently a multilingual system that obtains competitive results across (at least) several of the languages included in ABSA. As usual, most of the work has been done for English, with the large majority of the previous systems providing results only for one of the three English ABSA editions and without exploring the multilingual aspect. This could be due to the complex and language-specific systems that performed best for English BIBREF23 , or perhaps because the CMLA approach of BIBREF26 would require, in addition to the opinion targets, the gold standard annotations of the opinion expressions for each of the 6 languages other than English in the ABSA datasets. Methodology The work presented in this research note requires the following resources: (i) Aspect Based Sentiment Analysis (ABSA) data for training and testing; (ii) large unlabelled corpora to obtain semantic distributional features from clustering lexicons; and (iii) a sequence labelling system. In this section we will describe each of the resources used. ABSA Datasets Table TABREF7 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. 
From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set. For English, it should be noted that the size of the 2015 set is less than half with respect to the 2014 dataset in terms of tokens, and only one third in number of targets. The French, Spanish and Dutch datasets are quite similar in terms of tokens although the number of targets in the Dutch dataset is comparatively smaller, possibly due to the tendency to construct compound terms in that language. The Russian dataset is the largest whereas the Turkish set is by far the smallest one. Additionally, we think it is also interesting to note the low number of targets that are multiwords. To provide a couple of examples, for Spanish only the %35.59 of the targets are multiwords whereas for Dutch the percentage goes down to %25.68. If we compare these numbers with the CoNLL 2002 data for Named Entity Recognition (NER), a classic sequence labelling task, we find that in the ABSA data there is less than half the number of multiword targets than the number of multiword entities that can be found in the CoNLL Spanish and Dutch data (%35.59 vs %74.33 for Spanish and %25.68 vs %44.96 for Dutch). Unlabelled Corpora Apart from the manually annotated data, we also leveraged large, publicly available, unlabelled data to train the clusters: (i) Brown 1000 clusters and (ii) Clark and Word2vec clusters in the 100-800 range. In order to induce clusters from the restaurant domain we used the Yelp Academic Dataset, from which three versions were created. First, the full dataset, containing 225M tokens. Second, a subset consisting of filtering out those categories that do not correspond directly to food related reviews BIBREF29 . Thus, out of the 720 categories contained in the Yelp Academic Dataset, we kept the reviews from 173 of them. This Yelp food dataset contained 117M tokens in 997,721 reviews. Finally, we removed two more categories (Hotels and Hotels & Travel) from the Yelp food dataset to create the Yelp food-hotels subset containing around 102M tokens. For the rest of the languages we used their corresponding Wikipedia dumps. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF30 . The number of words used for each dataset, language and cluster type are described in Table TABREF9 . For example, the first row reads “Yelp Academic Dataset containing 225M words was used; after pre-processing, 156M words were taken to induce Brown clusters, whereas Clark and Word2vec clusters were trained on the whole corpus”. As explained in BIBREF7 , we pre-process the corpus before training Brown clusters, resulting in a smaller dataset than the original. Additionally, due to efficiency reasons, when the corpus is too large we use the pre-processed version to induce the Clark clusters. System We use the sequence labeller implemented within IXA pipes BIBREF7 . It learns supervised models based on the Perceptron algorithm BIBREF31 . To avoid duplication of efforts, it uses the Apache OpenNLP project implementation customized with its own features. By design, the sequence labeller aims to establish a simple and shallow feature set, avoiding any linguistic motivated features, with the objective of removing any reliance on costly extra gold annotations and/or cascading errors across annotations. 
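As an illustration of how one of the clustering lexicons described above could be induced, the sketch below trains skip-gram Word2vec embeddings on a tokenized, one-sentence-per-line corpus and groups the vocabulary with K-means. It assumes gensim 4.x and scikit-learn; the paper itself used the original Brown, Clark and word2vec tools over the corpora in Table TABREF9, so this is only an approximation of the Word2vec clusters.

# Sketch: induce a Word2vec clustering lexicon (token -> cluster class) from a
# pre-tokenized corpus, one sentence per line. Assumes gensim 4.x and
# scikit-learn; cluster counts in the paper range between 100 and 800.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def induce_word2vec_clusters(corpus_path, n_clusters=400):
    with open(corpus_path, encoding="utf-8") as f:
        sentences = [line.split() for line in f]
    model = Word2Vec(sentences, vector_size=100, sg=1, window=5, min_count=5)  # sg=1: skip-gram
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(model.wv.vectors)
    return dict(zip(model.wv.index_to_key, labels))

# lexicon = induce_word2vec_clusters("yelp_food_reviews.tok.txt")
# lexicon.get("salmon")  # -> a cluster class, later used as a feature value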
The system consists of: (i) Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context; and (ii) three types of simple clustering features, based on unigram matching: (i) Brown BIBREF32 clusters, taking the 4th, 8th, 12th and 20th node in the path; (ii) Clark BIBREF33 clusters and, (iii) Word2vec BIBREF34 clusters, based on K-means applied over the extracted word vectors using the skip-gram algorithm. The clustering features look for the cluster class of the incoming token in one or more of the clustering lexicons induced following the three methods listed above. If found, then the class is added as feature (“not found” otherwise). As we work on a 5 token window, for each token and clustering lexicon at least 5 features are generated. For Brown, the number of features generated depend on the number of nodes found in the path for each token and clustering lexicon used. Figure FIGREF13 depicts how our system relates, via clusters, unseen words with those words that have been seen as targets during the training process. Thus, the tokens `french-onions' and `salmon' would be annotated as opinion targets because they occur in the same clusters as seen words which in the training data are labeled as targets. The word representation features are combined and stacked using the clustering lexicons induced over the different data sources listed in Table TABREF9 . In other words, stacking means adding various clustering features of the same type obtained from different data sources (for example, using clusters trained on Yelp and on Wikipedia); combining refers to combining different types of clustering features obtained from the same data source (e.g., using features from Brown and Clark clustering lexicons). To choose the best combination of clustering features we tried, via 5-fold cross validation on the training set, every possible permutation of the available Clark and Word2vec clustering lexicons obtained from the data sources. Once the best combination of Clark and Word2vec clustering lexicons per data source was found, we tried to combine them with the Brown clusters. The result is a rather simple but very competitive system that has proven to be highly successful in the most popular Named Entity Recognition and Classification (NER) benchmarks, both in out-of-domain and in-domain evaluations. Furthermore, it was demonstrated that the system also performed robustly across languages without any language-specific tuning. Details of the system's implementation, including detailed description of the local and clustering features, can be found in BIBREF7 , including a section on how to combine the clustering features. A preliminary version of this system BIBREF20 was the winner of the OTE sub-task in the ABSA 2015 edition (English only). In the next section we show that this system obtains state-of-the-art results not only across domains and languages for NER, but also for other tasks such as Opinion Target Extraction. The results reported are obtained using the official ABSA evaluation scripts BIBREF4 , BIBREF5 , BIBREF6 . Experimental Results In this section we report on the experiments performed using the system and data described above. First we will present the English results for the three ABSA editions as well as a comparison with previous work. After that we will do the same for 5 additional languages included in the ABSA 2016 edition: Dutch, French, Russian, Spanish and Turkish. 
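A rough sketch of how the clustering features just described could be generated for a 5-token window is shown below. The lexicon format, feature strings and the "brown" naming convention are illustrative rather than the exact IXA pipes implementation; Brown classes are assumed to be bit-string paths so that the 4-, 8-, 12- and 20-length prefixes can be taken.

# Sketch of window-based clustering features: for each token in a +/-2 window,
# look up its class in every clustering lexicon and emit a feature string.
# For Brown, one feature per available path prefix (4, 8, 12, 20) is emitted.
BROWN_PREFIXES = (4, 8, 12, 20)

def cluster_features(tokens, i, lexicons):
    """lexicons: dict name -> {token: class}; Brown classes are bit-string paths."""
    feats = []
    for offset in range(-2, 3):                      # 5-token window around position i
        j = i + offset
        if not 0 <= j < len(tokens):
            continue
        tok = tokens[j].lower()
        for name, lex in lexicons.items():
            cls = lex.get(tok)
            if cls is None:
                feats.append(f"{name}[{offset}]=NOTFOUND")
            elif name.startswith("brown"):
                for p in BROWN_PREFIXES:
                    if len(cls) >= p:
                        feats.append(f"{name}:{p}[{offset}]={cls[:p]}")
            else:                                     # Clark or Word2vec class
                feats.append(f"{name}[{offset}]={cls}")
    return feats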
The local and clustering features, as described in Section SECREF11 , are the same for every language and evaluation setting. The only change is the clustering lexicons used for the different languages. As stated in section SECREF11 , the best cluster combination is chosen via 5-fold cross validation (CV) on the training data. We first try every permutation with the Clark and Word2vec clusters. Once the best combination is obtained, we then try with the Brown clusters obtaining thus the final model for each language and dataset. English Table TABREF16 provides detailed results on the Opinion Target Extraction (OTE) task for English. We show in bold our best model (ALL) chosen via 5-fold CV on the training data. Moreover, we also show the results of the best models using only one type of clustering feature, namely, the best Brown, Clark and Word2vec models, respectively. The first noteworthy issue is that the same model obtains the best results on the three English datasets. Second, it is also interesting to note the huge gains obtained by the clustering features, between 6-7 points in F1 score across the three ABSA datasets. Third, the results show that the combination of clustering features induced from different data sources is crucial. Fourth, the clustering features improve the recall by 12-15 points in the 2015 and 2016 data, and around 7 points for 2014. Finally, while in 2014 the precision also increases, in the 2015 setting it degrades almost by 4 points in F1 score. Table TABREF17 compares our results with previous work. MIN refers to the multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations with manually developed rules for detecting opinion expressions BIBREF27 . CNN-SenticNet is the 7 layer CNN with Amazon word embeddings, POS, linguistic rules based on syntax patterns and SenticNet BIBREF23 . LSTM is a Long Short Term Memory neural network built on top of word embeddings as proposed by BIBREF21 . WDEmb BIBREF35 uses word and dependency path, linear context and dependency context embedding features the input to a CRF. RNCRF is a joint model with CRF and a recursive neural network whereas CMLA is the Coupled Multilayer Attentions model described in section SECREF4 , both systems proposed by BIBREF26 . DLIREC-NLANGP is the winning system at ABSA 2014 and 2016 BIBREF17 , BIBREF18 , BIBREF19 while the penultimate row refers to our own system for all the three benchmarks (details in Table TABREF16 ). The results of Table TABREF17 show that our system, despite its simplicity, is highly competitive, obtaining the best results on the 2015 and 2016 datasets and a competitive performance on the 2014 benchmark. In particular, we outperform much more complex and language-specific approaches tuned via language-specific features, such as that of DLIREC-NLANGP. Furthermore, while the deep learning approaches (enriched with human-engineered linguistic features) obtain comparable or better results on the 2014 data, that is not the case for the 2015 and 2016 benchmarks, where our system outperforms also the MIN and CMLA models (systems which require manually added rules and gold-standard opinion expressions to obtain their best results, as explained in section SECREF4 ). In this sense, this means that our system obtains better results than MIN and CMLA by learning the targets independently instead of jointly learning the target and those expressions that convey the polarity of the opinion, namely, the opinion expression. 
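The two-stage selection of clustering lexicons recapped at the start of this section (every Clark/Word2vec combination evaluated via 5-fold CV, then trying to add Brown clusters on top of the winner) can be sketched as follows; evaluate_cv is a placeholder assumed to train the tagger with the given lexicons and return its mean F1 over the 5 folds.

# Sketch of the two-stage cluster-combination search driven by 5-fold CV.
# evaluate_cv(lexicon_names) is assumed to exist and return mean F1 over folds.
from itertools import chain, combinations

def subsets(items):
    return list(chain.from_iterable(combinations(items, r) for r in range(1, len(items) + 1)))

def select_lexicons(clark_w2v_lexicons, brown_lexicons, evaluate_cv):
    # Stage 1: best combination of Clark and Word2vec lexicons.
    best = max(subsets(clark_w2v_lexicons), key=evaluate_cv)
    # Stage 2: try adding Brown lexicons on top of the stage-1 winner.
    candidates = [best] + [best + b for b in subsets(brown_lexicons)]
    return max(candidates, key=evaluate_cv)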
There seems to be also a correlation between the size of the datasets and performance, given that the results on the 2014 data are much higher than those obtained using the 2015 and 2016 datasets. This might be due to the fact that the 2014 training set is substantially larger, as detailed in Table TABREF7 . In fact, the smaller datasets seem to affect more the deep learning approaches (LSTM, WDEmb, RNCRF) where only the MIN and CMLA models obtain similar results to ours, albeit using manually added language-specific annotations. Finally, it would have been interesting to compare MIN, CNN-SenticNet and CMLA with our system on the three ABSA benchmarks, but their systems are not publicly available. Multilingual We trained our system for 5 other languages on the ABSA 2016 datasets, using the same strategy as for English. We choose the best Clark-Word2vec combination (with and without Brown clusters) via 5-cross validation on the training data. The features are exactly the same as those used for English, the only change is the data on which the clusters are trained. Table TABREF19 reports on the detailed results obtained for each of the languages. In bold we show the best model chosen via 5-fold CV. Moreover, we also show the best models using only one of each of the clustering features. The first difference with respect to the English results is that the Brown clustering features are, in three out of five settings, detrimental to performance. Second, that combining clustering features is only beneficial for Spanish. Third, the overall results are in general lower than those obtained in the 2016 English data. Finally, the difference between the best results and the results using the Local features is lower than for English, even though the Local results are similar to those obtained with the English datasets (except for Turkish, but this is due to the significantly smaller size of the data, as shown in Table TABREF7 ). We believe that all these four issues are caused, at least partially, by the lack of domain-specific clustering features used for the multilingual experiments. In other words, while for the English experiments we leveraged the Yelp dataset to train the clustering algorithms, in the multilingual setting we first tried with already available clusters induced from the Wikipedia. Thus, it is to be expected that the gains obtained by clustering features obtained from domain-specific data such as Yelp would be superior to those achieved by the clusters trained on out-of-domain data. In spite of this, Table TABREF20 shows that our system outperforms the best previous approaches across the five languages. In some cases, such as Turkish and Russian, the best previous scores were the baselines provided by the ABSA organizers, but for Dutch, French and Spanish our system is significantly better than current state-of-the-art. In particular, and despite using the same system for every language, we improve over GTI's submission, which implemented a CRF system with linguistic features specific to Spanish BIBREF28 . Discussion and Error Analysis Considering the simplicity of our approach, we obtain best results for 6 languages and 7 different settings in the Opinion Target Extraction (OTE) benchmark for the restaurant domain using the ABSA 2014-2016 datasets. 
These results are obtained without linguistic or manually-engineered features, relying on injecting external knowledge from the combination of clustering features to obtain a robust system across languages, outperforming other more complex and language-specific systems. Furthermore, the feature set used is the same for every setting, reducing human intervention to a minimum and establishing a clear methodology for a fast and easy creation of competitive OTE multilingual taggers. The results also confirm the behaviour of these clustering algorithms to provide features for sequence labelling tasks such as OTE and Named Entity Recognition (NER), as previously discussed in BIBREF7 . Thus, in every evaluation setting the best results using Brown clusters as features were obtained when data close to the application domain and text genre, even if relatively small, was used to train the Brown algorithm. This can be clearly seen if we compare the English with the multilingual results. For English, the models including Brown clusters improve the Local features over 3-5 points in F1 score, whereas for Spanish, Dutch and Russian, they worsen performance. The reason is that for English the Yelp dataset is used whereas for the rest of languages the clusters are induced using the Wikipedia, effectively an out-of-domain corpus. The exception is Turkish, for which a 6 point gain in F1 score is obtained, but we believe that is probably due to the small size of the training data used for training the Local model. In contrast, Word2vec clusters clearly benefit from larger amounts of data, as illustrated by the best English Word2vec model being the one trained using the Wikipedia, and not the Yelp dataset, which is closer to the application domain. Finally, the Clark algorithm seems to be the most versatile as it consistently outperforms the other two clustering methods in 4 out of the 8 evaluation settings presented. Summarizing: (i) Brown clusters perform better when leveraged from source data close to the application domain, even if small in size; (ii) Clark clusters are the most robust of the three with respect to the size and domain of the data used; and (iii) for Word2vec size is the crucial factor. The larger the source data the better the performance. Thus, instead of choosing over one clustering type or the other, our system provides a method to effectively combining them, depending on the data sources available, to obtain robust and language independent sequence labelling systems. Finally, results show that our models are particularly competitive when the amount of training data available is small, allowing us to compete with more complex systems including also manually-engineered features, as shown especially by the English results on the 2015 and 2016 data. Error Analysis We will now discuss the shortcomings and most common errors performed by our system for the OTE task. By looking at the overall results in terms of precision and recall, it is possible to see the following patterns: With respect to the Local models, precision is consistently better than recall or, in other words, the coverage of the Local models is quite low. Tables TABREF16 and TABREF19 show that adding clustering features to the Local models allows to improve the recall for every evaluation setting, although with different outcomes. Overall, precision suffers, except for French. 
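For reference, the precision and recall figures discussed in this error analysis are computed over exact target spans. The sketch below is a simplified stand-in for the official ABSA evaluation scripts, only meant to show how spans are recovered from BIO sequences and how FP/FN counts translate into P, R and F1.

# Simplified span-level scoring: recover (start, end) target spans from BIO
# tag sequences and score exact matches. The official ABSA scripts operate on
# XML character offsets; this is only an illustration of P/R/F1 and FP/FN.
def bio_spans(tags):
    spans, start = set(), None
    for i, tag in enumerate(list(tags) + ["O"]):      # sentinel closes a final span
        if tag == "B-target":
            if start is not None:
                spans.add((start, i))
            start = i
        elif tag != "I-target":
            if start is not None:
                spans.add((start, i))
            start = None
    return spans

def span_prf(gold_tags, pred_tags):
    gold, pred = bio_spans(gold_tags), bio_spans(pred_tags)
    tp, fp, fn = len(gold & pred), len(pred - gold), len(gold - pred)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1, fp, fn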
Furthermore, in three cases (English 2014, 2016 and Russian) precision is lower than recall, whereas the remaining 5 evaluations show that, despite large improvements in F1 score, most errors in our system are caused by false negatives, as can be seen in Table TABREF23 . Table TABREF25 displays the top 5 most common false positive and false negative errors for English, Spanish and French. By inspecting our system's output, and both the test and training sets, we found that there were three main sources of errors: (a) errors caused by ambiguity in the use of certain source forms that may or may not refer to an opinion target; (b) span errors, where the target has only been partially annotated; and (c) unknown targets, which the system was unable to annotate by generalizing on the training data or clusters. With respect to type (a), it is useful to look at the most common errors for all three languages, namely, `place', `food' and `restaurant', which are also among the top 5 most frequent targets in the gold standard sets. By looking at Examples (1-3) we would say that in all three cases `place' should be annotated as an opinion target. However, (2) is a false positive (FP), (3) is a false negative (FN) and (1) is an example from the training set in which `place' is annotated as a target. This is the case with many instances of `place', for which there seems to be some inconsistency in the actual annotation of the training and test set examples. Example (1): Avoid this place! Example (2): this place is a keeper! Example (3): it is great place to watch sporting events. For other frequent type (a) errors, ambiguity is the main problem. Thus, in Spanish the use of `comida' and `restaurante' is highly ambiguous and causes many FPs and FNs because sometimes it is actually an opinion target whereas in many other cases it is just referring to the meal or the restaurant themselves without expressing any opinion about them. The same phenomenon occurs for “food” and “restaurant” in English and for `cuisine' and `restaurant' in French. Span type (b) errors are typically caused by long opinion targets such as “filet mignon on top of spinach and mashed potatoes”, for which our system annotates “filet” and “spinach” as separate targets, or “chicken curry and chicken tikka masala”, which is wrongly tagged as one target. These cases are difficult because on the surface they look similar, but the first one refers to one dish only, hence one target, whereas the second one refers to two separate dishes for which two different opinion targets should be annotated. Of course, these cases are particularly hurtful because they count as both FP and FN. Finally, type (c) errors are usually caused by the lack of generalization of our system to deal with unknown targets. Examples (4-7) contain various mentions of the “Ray's” restaurant, which is in the top 5 errors for the English 2016 test set. Example (4): After 12 years in Seattle Ray's rates as the place we always go back to. Example (5): We were only in Seattle for one night and I'm so glad we picked Rays for dinner! Example (6): I love Dungeness crabs and at Ray's you can get them served in about 6 different ways! Example (7): Imagine my happy surprise upon finding that the views are only the third-best thing about Ray's!
Example (8): Ray's is something of a Seattle institution. Examples (4), (5) and (7) are FNs, (6) is an FP caused by wrongly identifying the target as “Ray's you”, whereas (8) is not even annotated in the gold standard or by our system, although it should have been. Concluding Remarks In this research note we provide additional empirical experimentation complementing BIBREF7 , reporting the best results for Opinion Target Extraction for 6 languages and 7 datasets using the same set of simple, shallow and language-independent features. Furthermore, the results provide some interesting insights with respect to the use of clusters to inject external knowledge via semi-supervised features. First, Brown clusters are particularly beneficial when trained on domain-related data. This seems to be the case in the multilingual setting, where the Brown clusters (trained on out-of-domain Wikipedia data) worsen the system's performance for every language except for Turkish. Second, the results also show that Clark and Word2vec improve results in general, even if induced on out-of-domain data. Third, for best performance it is convenient to combine clusters obtained from diverse data sources, both from in- and out-of-domain corpora. Finally, the results indicate that, even when the amount of training data is small, such as in the 2015 and 2016 English benchmarks, our system's performance remains competitive thanks to the combination of clustering features. This, together with the lack of linguistic features, facilitates the easy and fast development of systems for new domains or languages. These considerations thus confirm the hypotheses stated in BIBREF7 with respect to the use of clustering features to obtain robust sequence taggers across languages and tasks. The system and models for every language and dataset are available as part of the ixa-pipe-opinion module for public use and reproducibility of results. Acknowledgments First, we would like to thank the anonymous reviewers for their comments, which helped improve the paper. We would also like to thank Iñaki San Vicente for his help obtaining the Yelp data. This work has been supported by the Spanish Ministry of Economy and Competitiveness (MINECO/FEDER, UE), under the projects TUNER (TIN2015-65308-C5-1-R) and CROSSTEXT (TIN2015-72646-EXP).
ABSA SemEval 2014-2016 datasets, Yelp Academic Dataset, Wikipedia dumps
e755fb599690d0d0c12ddb851ac731a0a7965797
e755fb599690d0d0c12ddb851ac731a0a7965797_0
Q: Which six languages are experimented with? Text: Introduction Opinion Mining and Sentiment Analysis (OMSA) are crucial for determining opinion trends and attitudes about commercial products, companies reputation management, brand monitoring, or to track attitudes by mining social media, etc. Furthermore, given the explosion of information produced and shared via the Internet, especially in social media, it is simply not possible to keep up with the constant flow of new information by manual methods. Early approaches to OMSA were based on document classification, where the task was to determine the polarity (positive, negative, neutral) of a given document or review BIBREF0 , BIBREF1 . A well known benchmark for polarity classification at document level is that of BIBREF2 . Later on, a finer-grained OMSA was deemed necessary. This was motivated by the fact that in a given review more than one opinion about a variety of aspects or attributes of a given product is usually conveyed. Thus, Aspect Based Sentiment Analysis (ABSA) was defined as a task which consisted of identifying several components of a given opinion: the opinion holder, the target, the opinion expression (the textual expression conveying polarity) and the aspects or features. Aspects are mostly domain-dependent. In restaurant reviews, relevant aspects would include “food quality”, “price”, “service”, “restaurant ambience”, etc. Similarly, if the reviews were about consumer electronics such as laptops, then aspects would include “size”, “battery life”, “hard drive capacity”, etc. In the review shown by Figure FIGREF1 there are three different opinions about two different aspects (categories) of the restaurant, namely, the first two opinions are about the quality of the food and the third one about the general ambience of the place. Furthermore, there are just two opinion targets because the target of the third opinion, the restaurant itself, remains implicit. Finally, each aspect is assigned a polarity; in this case all three opinion aspects are negative. In this work we focus on Opinion Target Extraction, which we model as a sequence labelling task. In order to do so, we convert an annotated review such as the one in Figure FIGREF1 into the BIO scheme for learning sequence labelling models BIBREF3 . Example (1) shows the review in BIO format. Tokens in the review are tagged depending on whether they are at the beginning (B-target), inside (I-target) or outside (O) of the opinion target expression. Note that the third opinion target in Figure FIGREF1 is implicit. We learn language independent models which consist of a set of local, shallow features complemented with semantic distributional features based on clusters obtained from a variety of data sources. We show that our approach, despite the lack of hand-engineered, language-specific features, obtains state-of-the-art results in 7 datasets for 6 languages on the ABSA benchmarks BIBREF4 , BIBREF5 , BIBREF6 . The main contribution of this research note is providing an extension or addendum to previous work on sequence labelling BIBREF7 by reporting additional experimental results as well as further insights on the performance of our model across languages on a different NLP task such as Opinion Target Extraction (OTE). Thus, we empirically demonstrate the validity and strong performance of our approach for six languages in seven different datasets of the restaurant domain. Every experiment and result presented in this note is novel. 
In this sense, we show that our approach is not only competitive across languages and domains for Named Entity Recognition, as shown by BIBREF7 , but that it can be straightforwardly adapted to different tasks and domains such as OTE. Furthermore, we release the system and every model trained for public use and to facilitate reproducibility of results. Background Early approaches to Opinion Target Extraction (OTE) were unsupervised, although later on the vast majority of works have been based on supervised and deep learning models. To the best of our knowledge, the first work on OTE was published by BIBREF8 . They created a new task which consisted of generating overviews of the main product features from a collection of customer reviews on consumer electronics. They addressed such task using an unsupervised algorithm based on association mining. Other early unsupervised approaches include BIBREF9 which used a dependency parser to obtain more opinion targets, and BIBREF10 which aimed at extracting opinion targets in newswire via Semantic Role Labelling. From a supervised perspective, BIBREF11 presented an approach which learned the opinion target candidates and a combination of dependency and part-of-speech (POS) paths connecting such pairs. Their results improved the baseline provided by BIBREF8 . Another influential work was BIBREF12 , an unsupervised algorithm called Double Propagation which roughly consists of incrementally augmenting a set of seeds via dependency parsing. Closer to our work, BIBREF13 , BIBREF14 and BIBREF15 approached OTE as a sequence labelling task, modelling the opinion targets using the BIO scheme. The first approach implemented HMM whereas the last two proposed CRFs to solve the problem. In all three cases, their systems included extensive human-designed and linguistically motivated features, such as POS tags, lemmas, dependencies, constituent parsing structure, lexical patterns and semantic features extracted from WordNet BIBREF16 . Quite frequently these works used a third party dataset, or a subset of the original one, or created their own annotated data for their experiments. The result was that it was difficult to draw precise conclusions about the advantages or disadvantages of the proposed methods. In this context, the Aspect Based Sentiment Analysis (ABSA) tasks at SemEval BIBREF4 , BIBREF5 , BIBREF6 provided standard training and evaluation data thereby helping to establish a clear benchmark for the OTE task. Finally, it should be noted that there is a closely related task, namely, the SemEval 2016 task on Stance Detection. Stance detection is related to ABSA, but there is a significant difference. In ABSA the task is to determine whether a piece of text is positive, negative, or neutral with respect to an aspect and a given target (which in Stance Detection is called “author's favorability” towards a given target). However, in Stance Detection the text may express opinion or sentiment about some other target, not mentioned in the given text, and the targets are predefined, whereas in ABSA the targets are open-ended. ABSA Tasks at SemEval Three ABSA editions were held within the SemEval Evaluation Exercises between 2014 and 2016. The ABSA 2014 and 2015 tasks consisted of English reviews only, whereas in the 2016 task 7 more languages were added. Additionally, reviews from four domains were collected for the various sub-tasks across the three editions, namely, Consumer Electronics, Telecommunications, Museums and Restaurant reviews. 
In any case, the only constant in each of the ABSA editions was the inclusion, for the Opinion Target Extraction (OTE) sub-task, of restaurant reviews for every language. Thus, for the experiments presented in this paper we decided to focus on the restaurant domain across 6 languages and the three different ABSA editions. Similarly, this section will be focused on reviewing the OTE results for the restaurant domain. The ABSA task consisted of identifying, for each opinion, the opinion target, the aspect referred to by the opinion and the aspect's polarity. Figure FIGREF1 illustrates the original annotation of a restaurant review in the ABSA 2016 dataset. It should be noted that, out of the three opinion components, only the targets are explicitly represented in the text, which means that OTE can be independently modelled as a sequence labelling problem as shown by Example (1). It is particularly important to notice that the opinion expressions (“dry”, “greasy”, “loud and rude”) are not annotated. Following previous approaches, the first competitive systems for OTE at ABSA were supervised. Among the participants (for English) in the three editions, one team BIBREF17 , BIBREF18 was particularly successful. For ABSA 2014 and 2015 they developed a CRF system with extensive handcrafted linguistic features: POS, head word, dependency relations, WordNet relations, gazetteers and Name Lists based on applying the Double Propagation algorithm BIBREF12 on an initial list of 551 seeds. Interestingly, they also introduced word representation features based on Brown and K-mean clusters. For ABSA 2016, they improved their system by using the output of a Recurrent Neural Network (RNN) to provide additional features. The RNN is trained on the following input features: word embeddings, Name Lists and word clusters BIBREF19 . They were the best system in 2014 and 2016. In 2015 they obtained the second best result, in which the best system, a preliminary version of the one presented in this note, was submitted by the EliXa team BIBREF20 . From 2015 onwards most works have been based on deep learning. BIBREF21 applied RNNs on top of a variety of pre-trained word embeddings, while BIBREF22 presented an architecture in which a RNN based tagger is stacked on top of the features generated by a Convolutional Neural Network (CNN). These systems were evaluated on the 2014 and 2015 datasets, respectively, but they did not go beyond the state-of-the-art. BIBREF23 presented a 7 layer deep CNN combining word embeddings trained on a INLINEFORM0 5 billion word corpus extracted from Amazon BIBREF24 , POS tag features and manually developed linguistic patterns based on syntactic analysis and SenticNet BIBREF25 a concept-level knowledge based build for Sentiment Analysis applications. They only evaluate their system on the English 2014 ABSA data, obtaining best results up to date on that benchmark. More recently, BIBREF26 proposed a coupled multi-layer attention (CMLA) network where each layer consists of a couple of attentions with tensor operators. Unlike previous approaches, their system does not use complex linguistic-based features designed for one specific language. However, whereas previous successful approaches modelled OTE as an independent task, in the CMLA model the attentions interactively learn both the opinion targets and the opinion expressions. 
As opinion expressions are not available in the original ABSA datasets, they had to manually annotate the ABSA training and testing data with the required opinion expressions. Although BIBREF26 did not release the datasets with the annotated opinion expressions, Figure FIGREF5 illustrates what these annotations would look like. Thus, two new attributes (pfrom and pto) annotate the opinion expressions for each of the three opinions (“dry”, “greasy” and “loud and rude”, respectively). Using this new manual information to train their CMLA network they reported the best results so far for ABSA 2014 and 2015 (English only). Finally, BIBREF27 develop a multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations. As BIBREF26 , they use opinion expressions annotations for a joint modelling of opinion targets and expressions. However, unlike BIBREF26 they do not manually annotate the opinion expressions. Instead they manually add sentiment lexicons and rules based on dependency parsing in order to find the opinion words required to train their system. Using this hand-engineered system, they report state of the art results only for English on the ABSA 2016 dataset. They do not provide evaluation results on the 2014 and 2015 restaurant datasets. With respect to other languages, the IIT-T team presented systems for 4 out of the 7 languages in ABSA 2016, obtaining the best score for French and Dutch, second in Spanish but with very poor results for English, well below the baseline. The GTI team BIBREF28 implemented a CRF system using POS, lemmas and bigrams as features. They obtained the best result for Spanish and rather modest results for English. Summarizing, the most successful systems for OTE have been based on supervised approaches with rather elaborate, complex and linguistically inspired features. BIBREF23 obtains best results on the ABSA 2014 data by means of a CNN with word embeddings trained on 5 billion words from Amazon, POS features, manual patterns based on syntactic analysis and SenticNet. More recently, the CMLA deep learning model has established new state-of-the-art results for the 2015 dataset, whereas BIBREF27 provide the state of the art for the 2016 benchmark. Thus, there is not currently a multilingual system that obtains competitive results across (at least) several of the languages included in ABSA. As usual, most of the work has been done for English, with the large majority of the previous systems providing results only for one of the three English ABSA editions and without exploring the multilingual aspect. This could be due to the complex and language-specific systems that performed best for English BIBREF23 , or perhaps because the CMLA approach of BIBREF26 would require, in addition to the opinion targets, the gold standard annotations of the opinion expressions for each of the 6 languages other than English in the ABSA datasets. Methodology The work presented in this research note requires the following resources: (i) Aspect Based Sentiment Analysis (ABSA) data for training and testing; (ii) large unlabelled corpora to obtain semantic distributional features from clustering lexicons; and (iii) a sequence labelling system. In this section we will describe each of the resources used. ABSA Datasets Table TABREF7 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. 
From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set. For English, it should be noted that the size of the 2015 set is less than half with respect to the 2014 dataset in terms of tokens, and only one third in number of targets. The French, Spanish and Dutch datasets are quite similar in terms of tokens although the number of targets in the Dutch dataset is comparatively smaller, possibly due to the tendency to construct compound terms in that language. The Russian dataset is the largest whereas the Turkish set is by far the smallest one. Additionally, we think it is also interesting to note the low number of targets that are multiwords. To provide a couple of examples, for Spanish only the %35.59 of the targets are multiwords whereas for Dutch the percentage goes down to %25.68. If we compare these numbers with the CoNLL 2002 data for Named Entity Recognition (NER), a classic sequence labelling task, we find that in the ABSA data there is less than half the number of multiword targets than the number of multiword entities that can be found in the CoNLL Spanish and Dutch data (%35.59 vs %74.33 for Spanish and %25.68 vs %44.96 for Dutch). Unlabelled Corpora Apart from the manually annotated data, we also leveraged large, publicly available, unlabelled data to train the clusters: (i) Brown 1000 clusters and (ii) Clark and Word2vec clusters in the 100-800 range. In order to induce clusters from the restaurant domain we used the Yelp Academic Dataset, from which three versions were created. First, the full dataset, containing 225M tokens. Second, a subset consisting of filtering out those categories that do not correspond directly to food related reviews BIBREF29 . Thus, out of the 720 categories contained in the Yelp Academic Dataset, we kept the reviews from 173 of them. This Yelp food dataset contained 117M tokens in 997,721 reviews. Finally, we removed two more categories (Hotels and Hotels & Travel) from the Yelp food dataset to create the Yelp food-hotels subset containing around 102M tokens. For the rest of the languages we used their corresponding Wikipedia dumps. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF30 . The number of words used for each dataset, language and cluster type are described in Table TABREF9 . For example, the first row reads “Yelp Academic Dataset containing 225M words was used; after pre-processing, 156M words were taken to induce Brown clusters, whereas Clark and Word2vec clusters were trained on the whole corpus”. As explained in BIBREF7 , we pre-process the corpus before training Brown clusters, resulting in a smaller dataset than the original. Additionally, due to efficiency reasons, when the corpus is too large we use the pre-processed version to induce the Clark clusters. System We use the sequence labeller implemented within IXA pipes BIBREF7 . It learns supervised models based on the Perceptron algorithm BIBREF31 . To avoid duplication of efforts, it uses the Apache OpenNLP project implementation customized with its own features. By design, the sequence labeller aims to establish a simple and shallow feature set, avoiding any linguistic motivated features, with the objective of removing any reliance on costly extra gold annotations and/or cascading errors across annotations. 
The system consists of: (i) Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context; and (ii) three types of simple clustering features, based on unigram matching: (i) Brown BIBREF32 clusters, taking the 4th, 8th, 12th and 20th node in the path; (ii) Clark BIBREF33 clusters and, (iii) Word2vec BIBREF34 clusters, based on K-means applied over the extracted word vectors using the skip-gram algorithm. The clustering features look for the cluster class of the incoming token in one or more of the clustering lexicons induced following the three methods listed above. If found, then the class is added as feature (“not found” otherwise). As we work on a 5 token window, for each token and clustering lexicon at least 5 features are generated. For Brown, the number of features generated depend on the number of nodes found in the path for each token and clustering lexicon used. Figure FIGREF13 depicts how our system relates, via clusters, unseen words with those words that have been seen as targets during the training process. Thus, the tokens `french-onions' and `salmon' would be annotated as opinion targets because they occur in the same clusters as seen words which in the training data are labeled as targets. The word representation features are combined and stacked using the clustering lexicons induced over the different data sources listed in Table TABREF9 . In other words, stacking means adding various clustering features of the same type obtained from different data sources (for example, using clusters trained on Yelp and on Wikipedia); combining refers to combining different types of clustering features obtained from the same data source (e.g., using features from Brown and Clark clustering lexicons). To choose the best combination of clustering features we tried, via 5-fold cross validation on the training set, every possible permutation of the available Clark and Word2vec clustering lexicons obtained from the data sources. Once the best combination of Clark and Word2vec clustering lexicons per data source was found, we tried to combine them with the Brown clusters. The result is a rather simple but very competitive system that has proven to be highly successful in the most popular Named Entity Recognition and Classification (NER) benchmarks, both in out-of-domain and in-domain evaluations. Furthermore, it was demonstrated that the system also performed robustly across languages without any language-specific tuning. Details of the system's implementation, including detailed description of the local and clustering features, can be found in BIBREF7 , including a section on how to combine the clustering features. A preliminary version of this system BIBREF20 was the winner of the OTE sub-task in the ABSA 2015 edition (English only). In the next section we show that this system obtains state-of-the-art results not only across domains and languages for NER, but also for other tasks such as Opinion Target Extraction. The results reported are obtained using the official ABSA evaluation scripts BIBREF4 , BIBREF5 , BIBREF6 . Experimental Results In this section we report on the experiments performed using the system and data described above. First we will present the English results for the three ABSA editions as well as a comparison with previous work. After that we will do the same for 5 additional languages included in the ABSA 2016 edition: Dutch, French, Russian, Spanish and Turkish. 
The local and clustering features, as described in Section SECREF11 , are the same for every language and evaluation setting. The only change is the clustering lexicons used for the different languages. As stated in section SECREF11 , the best cluster combination is chosen via 5-fold cross validation (CV) on the training data. We first try every permutation with the Clark and Word2vec clusters. Once the best combination is obtained, we then try with the Brown clusters obtaining thus the final model for each language and dataset. English Table TABREF16 provides detailed results on the Opinion Target Extraction (OTE) task for English. We show in bold our best model (ALL) chosen via 5-fold CV on the training data. Moreover, we also show the results of the best models using only one type of clustering feature, namely, the best Brown, Clark and Word2vec models, respectively. The first noteworthy issue is that the same model obtains the best results on the three English datasets. Second, it is also interesting to note the huge gains obtained by the clustering features, between 6-7 points in F1 score across the three ABSA datasets. Third, the results show that the combination of clustering features induced from different data sources is crucial. Fourth, the clustering features improve the recall by 12-15 points in the 2015 and 2016 data, and around 7 points for 2014. Finally, while in 2014 the precision also increases, in the 2015 setting it degrades almost by 4 points in F1 score. Table TABREF17 compares our results with previous work. MIN refers to the multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations with manually developed rules for detecting opinion expressions BIBREF27 . CNN-SenticNet is the 7 layer CNN with Amazon word embeddings, POS, linguistic rules based on syntax patterns and SenticNet BIBREF23 . LSTM is a Long Short Term Memory neural network built on top of word embeddings as proposed by BIBREF21 . WDEmb BIBREF35 uses word and dependency path, linear context and dependency context embedding features the input to a CRF. RNCRF is a joint model with CRF and a recursive neural network whereas CMLA is the Coupled Multilayer Attentions model described in section SECREF4 , both systems proposed by BIBREF26 . DLIREC-NLANGP is the winning system at ABSA 2014 and 2016 BIBREF17 , BIBREF18 , BIBREF19 while the penultimate row refers to our own system for all the three benchmarks (details in Table TABREF16 ). The results of Table TABREF17 show that our system, despite its simplicity, is highly competitive, obtaining the best results on the 2015 and 2016 datasets and a competitive performance on the 2014 benchmark. In particular, we outperform much more complex and language-specific approaches tuned via language-specific features, such as that of DLIREC-NLANGP. Furthermore, while the deep learning approaches (enriched with human-engineered linguistic features) obtain comparable or better results on the 2014 data, that is not the case for the 2015 and 2016 benchmarks, where our system outperforms also the MIN and CMLA models (systems which require manually added rules and gold-standard opinion expressions to obtain their best results, as explained in section SECREF4 ). In this sense, this means that our system obtains better results than MIN and CMLA by learning the targets independently instead of jointly learning the target and those expressions that convey the polarity of the opinion, namely, the opinion expression. 
There seems to be also a correlation between the size of the datasets and performance, given that the results on the 2014 data are much higher than those obtained using the 2015 and 2016 datasets. This might be due to the fact that the 2014 training set is substantially larger, as detailed in Table TABREF7 . In fact, the smaller datasets seem to affect more the deep learning approaches (LSTM, WDEmb, RNCRF) where only the MIN and CMLA models obtain similar results to ours, albeit using manually added language-specific annotations. Finally, it would have been interesting to compare MIN, CNN-SenticNet and CMLA with our system on the three ABSA benchmarks, but their systems are not publicly available. Multilingual We trained our system for 5 other languages on the ABSA 2016 datasets, using the same strategy as for English. We choose the best Clark-Word2vec combination (with and without Brown clusters) via 5-cross validation on the training data. The features are exactly the same as those used for English, the only change is the data on which the clusters are trained. Table TABREF19 reports on the detailed results obtained for each of the languages. In bold we show the best model chosen via 5-fold CV. Moreover, we also show the best models using only one of each of the clustering features. The first difference with respect to the English results is that the Brown clustering features are, in three out of five settings, detrimental to performance. Second, that combining clustering features is only beneficial for Spanish. Third, the overall results are in general lower than those obtained in the 2016 English data. Finally, the difference between the best results and the results using the Local features is lower than for English, even though the Local results are similar to those obtained with the English datasets (except for Turkish, but this is due to the significantly smaller size of the data, as shown in Table TABREF7 ). We believe that all these four issues are caused, at least partially, by the lack of domain-specific clustering features used for the multilingual experiments. In other words, while for the English experiments we leveraged the Yelp dataset to train the clustering algorithms, in the multilingual setting we first tried with already available clusters induced from the Wikipedia. Thus, it is to be expected that the gains obtained by clustering features obtained from domain-specific data such as Yelp would be superior to those achieved by the clusters trained on out-of-domain data. In spite of this, Table TABREF20 shows that our system outperforms the best previous approaches across the five languages. In some cases, such as Turkish and Russian, the best previous scores were the baselines provided by the ABSA organizers, but for Dutch, French and Spanish our system is significantly better than current state-of-the-art. In particular, and despite using the same system for every language, we improve over GTI's submission, which implemented a CRF system with linguistic features specific to Spanish BIBREF28 . Discussion and Error Analysis Considering the simplicity of our approach, we obtain best results for 6 languages and 7 different settings in the Opinion Target Extraction (OTE) benchmark for the restaurant domain using the ABSA 2014-2016 datasets. 
These results are obtained without linguistic or manually-engineered features, relying on injecting external knowledge from the combination of clustering features to obtain a robust system across languages, outperforming other more complex and language-specific systems. Furthermore, the feature set used is the same for every setting, reducing human intervention to a minimum and establishing a clear methodology for a fast and easy creation of competitive OTE multilingual taggers. The results also confirm the behaviour of these clustering algorithms to provide features for sequence labelling tasks such as OTE and Named Entity Recognition (NER), as previously discussed in BIBREF7 . Thus, in every evaluation setting the best results using Brown clusters as features were obtained when data close to the application domain and text genre, even if relatively small, was used to train the Brown algorithm. This can be clearly seen if we compare the English with the multilingual results. For English, the models including Brown clusters improve the Local features over 3-5 points in F1 score, whereas for Spanish, Dutch and Russian, they worsen performance. The reason is that for English the Yelp dataset is used whereas for the rest of languages the clusters are induced using the Wikipedia, effectively an out-of-domain corpus. The exception is Turkish, for which a 6 point gain in F1 score is obtained, but we believe that is probably due to the small size of the training data used for training the Local model. In contrast, Word2vec clusters clearly benefit from larger amounts of data, as illustrated by the best English Word2vec model being the one trained using the Wikipedia, and not the Yelp dataset, which is closer to the application domain. Finally, the Clark algorithm seems to be the most versatile as it consistently outperforms the other two clustering methods in 4 out of the 8 evaluation settings presented. Summarizing: (i) Brown clusters perform better when leveraged from source data close to the application domain, even if small in size; (ii) Clark clusters are the most robust of the three with respect to the size and domain of the data used; and (iii) for Word2vec size is the crucial factor. The larger the source data the better the performance. Thus, instead of choosing over one clustering type or the other, our system provides a method to effectively combining them, depending on the data sources available, to obtain robust and language independent sequence labelling systems. Finally, results show that our models are particularly competitive when the amount of training data available is small, allowing us to compete with more complex systems including also manually-engineered features, as shown especially by the English results on the 2015 and 2016 data. Error Analysis We will now discuss the shortcomings and most common errors performed by our system for the OTE task. By looking at the overall results in terms of precision and recall, it is possible to see the following patterns: With respect to the Local models, precision is consistently better than recall or, in other words, the coverage of the Local models is quite low. Tables TABREF16 and TABREF19 show that adding clustering features to the Local models allows to improve the recall for every evaluation setting, although with different outcomes. Overall, precision suffers, except for French. 
Furthermore, in three cases (English 2014, 2016 and Russian) precision is lower than recall, whereas the remaining 5 evaluations show that, despite large improvements in F1 score, most errors in our system are caused by false negatives, as it can be seen in Table TABREF23 . Table TABREF25 displays the top 5 most common false positives and false negative errors for English, Spanish and French. By inspecting our system's output, and both the test and training sets, we found out that there were three main sources of errors: (a) errors caused by ambiguity in the use of certain source forms that may or may not refer to an opinion target; (b) span errors, where the target has only been partially annotated; and (c) unknown targets, which the system was unable to annotate by generalizing on the training data or clusters. With respect to type (a), it is useful to look at the most common errors for all three languages, namely, `place', `food' and `restaurant', which are also among the top 5 most frequent targets in the gold standard sets. By looking at Examples (1-3) we would say that in all three cases `place' should be annotated as opinion target. However, (2) is a false positive (FP), (3) is a false negative (FN) and (1) is an example from the training set in which `place' is annotated as target. This is the case with many instances of `place' for which there seems to be some inconsistency in the actual annotation of the training and test set examples. Example (1): Avoid this place! Example (2): this place is a keeper! Example (3): it is great place to watch sporting events. For other frequent type (a) errors, ambiguity is the main problem. Thus, in Spanish the use of `comida' and `restaurante' is highly ambiguous and causes many FPs and FNs because sometimes it is actually an opinion target whereas in many other other cases it is just referring to the meal or the restaurant themselves without expressing any opinion about them. The same phenomenon occurs for “food” and “restaurant” in English and for `cuisine' and `restaurant' in French. Span type (b) errors are typically caused by long opinion targets such as “filet mignon on top of spinach and mashed potatoes” for which our system annotates “filet” and “spinach” as separate targets, or “chicken curry and chicken tikka masala” which is wrongly tagged as one target. These cases are difficult because on the surface they look similar but the first one refers to one dish only, hence one target, whereas the second one refers to two separate dishes for which two different opinion targets should be annotated. Of course, these cases are particularly hurtful because they count as both FP and FN. Finally, type (c) errors are usually caused by lack of generalization of our system to deal with unknown targets. Example (4-7) contain various mentions to the “Ray's” restaurant, which is in the top 5 errors for the English 2016 test set. Example (4): After 12 years in Seattle Ray's rates as the place we always go back to. Example (5): We were only in Seattle for one night and I'm so glad we picked Rays for dinner! Example (6): I love Dungeness crabs and at Ray's you can get them served in about 6 different ways! Example (7): Imagine my happy surprise upon finding that the views are only the third-best thing about Ray's! 
Example (8): Ray's is something of a Seattle institution. Examples (4), (5) and (7) are FNs, (6) is an FP caused by wrongly identifying the target as “Ray's you”, whereas (8) is not even annotated in the gold standard or by our system, although it should have been. Concluding Remarks In this research note we provide additional empirical experimentation complementing BIBREF7 , reporting best results for Opinion Target Extraction for 6 languages and 7 datasets using the same set of simple, shallow and language independent features. Furthermore, the results provide some interesting insights with respect to the use of clusters to inject external knowledge via semi-supervised features. First, Brown clusters are particularly beneficial when trained on domain-related data. This is borne out in the multilingual setting, where the Brown clusters (trained on out-of-domain Wikipedia data) worsen the system's performance for every language except for Turkish. Second, the results also show that Clark and Word2vec clusters improve performance in general, even if induced on out-of-domain data. Third, for best performance it is convenient to combine clusters obtained from diverse data sources, both from in- and out-of-domain corpora. Finally, the results indicate that, even when the amount of training data is small, such as in the 2015 and 2016 English benchmarks, our system's performance remains competitive thanks to the combination of clustering features. This, together with the lack of linguistic features, facilitates the easy and fast development of systems for new domains or languages. These considerations thus confirm the hypotheses stated in BIBREF7 with respect to the use of clustering features to obtain robust sequence taggers across languages and tasks. The system and models for every language and dataset are available as part of the ixa-pipe-opinion module for public use and reproducibility of results. Acknowledgments First, we would like to thank the anonymous reviewers for their comments to improve the paper. We would also like to thank Iñaki San Vicente for his help obtaining the Yelp data. This work has been supported by the Spanish Ministry of Economy and Competitiveness (MINECO/FEDER, UE), under the projects TUNER (TIN2015-65308-C5-1-R) and CROSSTEXT (TIN2015-72646-EXP).
Dutch, French, Russian, Spanish, Turkish, English
7e51490a362135267e75b2817de3c38dfe846e21
7e51490a362135267e75b2817de3c38dfe846e21_0
Q: What shallow local features are extracted? Text: Introduction Opinion Mining and Sentiment Analysis (OMSA) are crucial for determining opinion trends and attitudes about commercial products, companies reputation management, brand monitoring, or to track attitudes by mining social media, etc. Furthermore, given the explosion of information produced and shared via the Internet, especially in social media, it is simply not possible to keep up with the constant flow of new information by manual methods. Early approaches to OMSA were based on document classification, where the task was to determine the polarity (positive, negative, neutral) of a given document or review BIBREF0 , BIBREF1 . A well known benchmark for polarity classification at document level is that of BIBREF2 . Later on, a finer-grained OMSA was deemed necessary. This was motivated by the fact that in a given review more than one opinion about a variety of aspects or attributes of a given product is usually conveyed. Thus, Aspect Based Sentiment Analysis (ABSA) was defined as a task which consisted of identifying several components of a given opinion: the opinion holder, the target, the opinion expression (the textual expression conveying polarity) and the aspects or features. Aspects are mostly domain-dependent. In restaurant reviews, relevant aspects would include “food quality”, “price”, “service”, “restaurant ambience”, etc. Similarly, if the reviews were about consumer electronics such as laptops, then aspects would include “size”, “battery life”, “hard drive capacity”, etc. In the review shown by Figure FIGREF1 there are three different opinions about two different aspects (categories) of the restaurant, namely, the first two opinions are about the quality of the food and the third one about the general ambience of the place. Furthermore, there are just two opinion targets because the target of the third opinion, the restaurant itself, remains implicit. Finally, each aspect is assigned a polarity; in this case all three opinion aspects are negative. In this work we focus on Opinion Target Extraction, which we model as a sequence labelling task. In order to do so, we convert an annotated review such as the one in Figure FIGREF1 into the BIO scheme for learning sequence labelling models BIBREF3 . Example (1) shows the review in BIO format. Tokens in the review are tagged depending on whether they are at the beginning (B-target), inside (I-target) or outside (O) of the opinion target expression. Note that the third opinion target in Figure FIGREF1 is implicit. We learn language independent models which consist of a set of local, shallow features complemented with semantic distributional features based on clusters obtained from a variety of data sources. We show that our approach, despite the lack of hand-engineered, language-specific features, obtains state-of-the-art results in 7 datasets for 6 languages on the ABSA benchmarks BIBREF4 , BIBREF5 , BIBREF6 . The main contribution of this research note is providing an extension or addendum to previous work on sequence labelling BIBREF7 by reporting additional experimental results as well as further insights on the performance of our model across languages on a different NLP task such as Opinion Target Extraction (OTE). Thus, we empirically demonstrate the validity and strong performance of our approach for six languages in seven different datasets of the restaurant domain. Every experiment and result presented in this note is novel. 
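As a rough illustration of the BIO conversion mentioned above, the sketch below maps character-offset target annotations (as found in the ABSA XML files) to B-target/I-target/O tags. The whitespace tokenisation is a simplification for illustration only and is not the IXA pipes tokenizer.

```python
# Sketch: converting character-offset opinion-target spans into BIO tags.
def to_bio(text, targets):
    """targets: list of (from_offset, to_offset) character spans."""
    tokens, offset, spans = [], 0, []
    for tok in text.split():
        start = text.index(tok, offset)
        end = start + len(tok)
        tokens.append(tok)
        spans.append((start, end))
        offset = end
    tags = []
    for start, end in spans:
        tag = "O"
        for t_from, t_to in targets:
            if start >= t_from and end <= t_to:
                tag = "B-target" if start == t_from else "I-target"
                break
        tags.append(tag)
    return list(zip(tokens, tags))

# Example: to_bio("The food was dry", [(4, 8)])
# -> [('The', 'O'), ('food', 'B-target'), ('was', 'O'), ('dry', 'O')]
```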
In this sense, we show that our approach is not only competitive across languages and domains for Named Entity Recognition, as shown by BIBREF7 , but that it can be straightforwardly adapted to different tasks and domains such as OTE. Furthermore, we release the system and every model trained for public use and to facilitate reproducibility of results. Background Early approaches to Opinion Target Extraction (OTE) were unsupervised, although later on the vast majority of works have been based on supervised and deep learning models. To the best of our knowledge, the first work on OTE was published by BIBREF8 . They created a new task which consisted of generating overviews of the main product features from a collection of customer reviews on consumer electronics. They addressed such task using an unsupervised algorithm based on association mining. Other early unsupervised approaches include BIBREF9 which used a dependency parser to obtain more opinion targets, and BIBREF10 which aimed at extracting opinion targets in newswire via Semantic Role Labelling. From a supervised perspective, BIBREF11 presented an approach which learned the opinion target candidates and a combination of dependency and part-of-speech (POS) paths connecting such pairs. Their results improved the baseline provided by BIBREF8 . Another influential work was BIBREF12 , an unsupervised algorithm called Double Propagation which roughly consists of incrementally augmenting a set of seeds via dependency parsing. Closer to our work, BIBREF13 , BIBREF14 and BIBREF15 approached OTE as a sequence labelling task, modelling the opinion targets using the BIO scheme. The first approach implemented HMM whereas the last two proposed CRFs to solve the problem. In all three cases, their systems included extensive human-designed and linguistically motivated features, such as POS tags, lemmas, dependencies, constituent parsing structure, lexical patterns and semantic features extracted from WordNet BIBREF16 . Quite frequently these works used a third party dataset, or a subset of the original one, or created their own annotated data for their experiments. The result was that it was difficult to draw precise conclusions about the advantages or disadvantages of the proposed methods. In this context, the Aspect Based Sentiment Analysis (ABSA) tasks at SemEval BIBREF4 , BIBREF5 , BIBREF6 provided standard training and evaluation data thereby helping to establish a clear benchmark for the OTE task. Finally, it should be noted that there is a closely related task, namely, the SemEval 2016 task on Stance Detection. Stance detection is related to ABSA, but there is a significant difference. In ABSA the task is to determine whether a piece of text is positive, negative, or neutral with respect to an aspect and a given target (which in Stance Detection is called “author's favorability” towards a given target). However, in Stance Detection the text may express opinion or sentiment about some other target, not mentioned in the given text, and the targets are predefined, whereas in ABSA the targets are open-ended. ABSA Tasks at SemEval Three ABSA editions were held within the SemEval Evaluation Exercises between 2014 and 2016. The ABSA 2014 and 2015 tasks consisted of English reviews only, whereas in the 2016 task 7 more languages were added. Additionally, reviews from four domains were collected for the various sub-tasks across the three editions, namely, Consumer Electronics, Telecommunications, Museums and Restaurant reviews. 
In any case, the only constant in each of the ABSA editions was the inclusion, for the Opinion Target Extraction (OTE) sub-task, of restaurant reviews for every language. Thus, for the experiments presented in this paper we decided to focus on the restaurant domain across 6 languages and the three different ABSA editions. Similarly, this section will be focused on reviewing the OTE results for the restaurant domain. The ABSA task consisted of identifying, for each opinion, the opinion target, the aspect referred to by the opinion and the aspect's polarity. Figure FIGREF1 illustrates the original annotation of a restaurant review in the ABSA 2016 dataset. It should be noted that, out of the three opinion components, only the targets are explicitly represented in the text, which means that OTE can be independently modelled as a sequence labelling problem as shown by Example (1). It is particularly important to notice that the opinion expressions (“dry”, “greasy”, “loud and rude”) are not annotated. Following previous approaches, the first competitive systems for OTE at ABSA were supervised. Among the participants (for English) in the three editions, one team BIBREF17 , BIBREF18 was particularly successful. For ABSA 2014 and 2015 they developed a CRF system with extensive handcrafted linguistic features: POS, head word, dependency relations, WordNet relations, gazetteers and Name Lists based on applying the Double Propagation algorithm BIBREF12 on an initial list of 551 seeds. Interestingly, they also introduced word representation features based on Brown and K-mean clusters. For ABSA 2016, they improved their system by using the output of a Recurrent Neural Network (RNN) to provide additional features. The RNN is trained on the following input features: word embeddings, Name Lists and word clusters BIBREF19 . They were the best system in 2014 and 2016. In 2015 they obtained the second best result, in which the best system, a preliminary version of the one presented in this note, was submitted by the EliXa team BIBREF20 . From 2015 onwards most works have been based on deep learning. BIBREF21 applied RNNs on top of a variety of pre-trained word embeddings, while BIBREF22 presented an architecture in which a RNN based tagger is stacked on top of the features generated by a Convolutional Neural Network (CNN). These systems were evaluated on the 2014 and 2015 datasets, respectively, but they did not go beyond the state-of-the-art. BIBREF23 presented a 7 layer deep CNN combining word embeddings trained on a INLINEFORM0 5 billion word corpus extracted from Amazon BIBREF24 , POS tag features and manually developed linguistic patterns based on syntactic analysis and SenticNet BIBREF25 a concept-level knowledge based build for Sentiment Analysis applications. They only evaluate their system on the English 2014 ABSA data, obtaining best results up to date on that benchmark. More recently, BIBREF26 proposed a coupled multi-layer attention (CMLA) network where each layer consists of a couple of attentions with tensor operators. Unlike previous approaches, their system does not use complex linguistic-based features designed for one specific language. However, whereas previous successful approaches modelled OTE as an independent task, in the CMLA model the attentions interactively learn both the opinion targets and the opinion expressions. 
As opinion expressions are not available in the original ABSA datasets, they had to manually annotate the ABSA training and testing data with the required opinion expressions. Although BIBREF26 did not release the datasets with the annotated opinion expressions, Figure FIGREF5 illustrates what these annotations would look like. Thus, two new attributes (pfrom and pto) annotate the opinion expressions for each of the three opinions (“dry”, “greasy” and “loud and rude”, respectively). Using this new manual information to train their CMLA network they reported the best results so far for ABSA 2014 and 2015 (English only). Finally, BIBREF27 develop a multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations. As BIBREF26 , they use opinion expressions annotations for a joint modelling of opinion targets and expressions. However, unlike BIBREF26 they do not manually annotate the opinion expressions. Instead they manually add sentiment lexicons and rules based on dependency parsing in order to find the opinion words required to train their system. Using this hand-engineered system, they report state of the art results only for English on the ABSA 2016 dataset. They do not provide evaluation results on the 2014 and 2015 restaurant datasets. With respect to other languages, the IIT-T team presented systems for 4 out of the 7 languages in ABSA 2016, obtaining the best score for French and Dutch, second in Spanish but with very poor results for English, well below the baseline. The GTI team BIBREF28 implemented a CRF system using POS, lemmas and bigrams as features. They obtained the best result for Spanish and rather modest results for English. Summarizing, the most successful systems for OTE have been based on supervised approaches with rather elaborate, complex and linguistically inspired features. BIBREF23 obtains best results on the ABSA 2014 data by means of a CNN with word embeddings trained on 5 billion words from Amazon, POS features, manual patterns based on syntactic analysis and SenticNet. More recently, the CMLA deep learning model has established new state-of-the-art results for the 2015 dataset, whereas BIBREF27 provide the state of the art for the 2016 benchmark. Thus, there is not currently a multilingual system that obtains competitive results across (at least) several of the languages included in ABSA. As usual, most of the work has been done for English, with the large majority of the previous systems providing results only for one of the three English ABSA editions and without exploring the multilingual aspect. This could be due to the complex and language-specific systems that performed best for English BIBREF23 , or perhaps because the CMLA approach of BIBREF26 would require, in addition to the opinion targets, the gold standard annotations of the opinion expressions for each of the 6 languages other than English in the ABSA datasets. Methodology The work presented in this research note requires the following resources: (i) Aspect Based Sentiment Analysis (ABSA) data for training and testing; (ii) large unlabelled corpora to obtain semantic distributional features from clustering lexicons; and (iii) a sequence labelling system. In this section we will describe each of the resources used. ABSA Datasets Table TABREF7 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. 
From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set. For English, it should be noted that the size of the 2015 set is less than half with respect to the 2014 dataset in terms of tokens, and only one third in number of targets. The French, Spanish and Dutch datasets are quite similar in terms of tokens although the number of targets in the Dutch dataset is comparatively smaller, possibly due to the tendency to construct compound terms in that language. The Russian dataset is the largest whereas the Turkish set is by far the smallest one. Additionally, we think it is also interesting to note the low number of targets that are multiwords. To provide a couple of examples, for Spanish only the %35.59 of the targets are multiwords whereas for Dutch the percentage goes down to %25.68. If we compare these numbers with the CoNLL 2002 data for Named Entity Recognition (NER), a classic sequence labelling task, we find that in the ABSA data there is less than half the number of multiword targets than the number of multiword entities that can be found in the CoNLL Spanish and Dutch data (%35.59 vs %74.33 for Spanish and %25.68 vs %44.96 for Dutch). Unlabelled Corpora Apart from the manually annotated data, we also leveraged large, publicly available, unlabelled data to train the clusters: (i) Brown 1000 clusters and (ii) Clark and Word2vec clusters in the 100-800 range. In order to induce clusters from the restaurant domain we used the Yelp Academic Dataset, from which three versions were created. First, the full dataset, containing 225M tokens. Second, a subset consisting of filtering out those categories that do not correspond directly to food related reviews BIBREF29 . Thus, out of the 720 categories contained in the Yelp Academic Dataset, we kept the reviews from 173 of them. This Yelp food dataset contained 117M tokens in 997,721 reviews. Finally, we removed two more categories (Hotels and Hotels & Travel) from the Yelp food dataset to create the Yelp food-hotels subset containing around 102M tokens. For the rest of the languages we used their corresponding Wikipedia dumps. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF30 . The number of words used for each dataset, language and cluster type are described in Table TABREF9 . For example, the first row reads “Yelp Academic Dataset containing 225M words was used; after pre-processing, 156M words were taken to induce Brown clusters, whereas Clark and Word2vec clusters were trained on the whole corpus”. As explained in BIBREF7 , we pre-process the corpus before training Brown clusters, resulting in a smaller dataset than the original. Additionally, due to efficiency reasons, when the corpus is too large we use the pre-processed version to induce the Clark clusters. System We use the sequence labeller implemented within IXA pipes BIBREF7 . It learns supervised models based on the Perceptron algorithm BIBREF31 . To avoid duplication of efforts, it uses the Apache OpenNLP project implementation customized with its own features. By design, the sequence labeller aims to establish a simple and shallow feature set, avoiding any linguistic motivated features, with the objective of removing any reliance on costly extra gold annotations and/or cascading errors across annotations. 
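As an illustration of how one of the three clustering lexicons could be induced, the sketch below trains skip-gram vectors and clusters them with K-means, assuming gensim (>= 4) and scikit-learn; the hyperparameters shown (300-dimensional vectors, 400 clusters) are placeholders within the 100-800 cluster range mentioned above, not the exact settings used by the authors.

```python
# Sketch: inducing a Word2vec clustering lexicon (skip-gram + K-means) from a
# pre-processed corpus; corpus_sentences is an iterable of token lists.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def induce_word2vec_lexicon(corpus_sentences, n_clusters=400, dim=300):
    model = Word2Vec(corpus_sentences, vector_size=dim, sg=1,  # skip-gram
                     window=5, min_count=5, workers=4)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(model.wv.vectors)
    return {word: int(cluster)
            for word, cluster in zip(model.wv.index_to_key, labels)}
```

Brown and Clark lexicons would be induced analogously with their respective toolkits; only the resulting word-to-class mapping is consumed by the tagger.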
The system consists of: (i) Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context; and (ii) three types of simple clustering features, based on unigram matching: (i) Brown BIBREF32 clusters, taking the 4th, 8th, 12th and 20th node in the path; (ii) Clark BIBREF33 clusters and, (iii) Word2vec BIBREF34 clusters, based on K-means applied over the extracted word vectors using the skip-gram algorithm. The clustering features look for the cluster class of the incoming token in one or more of the clustering lexicons induced following the three methods listed above. If found, then the class is added as feature (“not found” otherwise). As we work on a 5 token window, for each token and clustering lexicon at least 5 features are generated. For Brown, the number of features generated depend on the number of nodes found in the path for each token and clustering lexicon used. Figure FIGREF13 depicts how our system relates, via clusters, unseen words with those words that have been seen as targets during the training process. Thus, the tokens `french-onions' and `salmon' would be annotated as opinion targets because they occur in the same clusters as seen words which in the training data are labeled as targets. The word representation features are combined and stacked using the clustering lexicons induced over the different data sources listed in Table TABREF9 . In other words, stacking means adding various clustering features of the same type obtained from different data sources (for example, using clusters trained on Yelp and on Wikipedia); combining refers to combining different types of clustering features obtained from the same data source (e.g., using features from Brown and Clark clustering lexicons). To choose the best combination of clustering features we tried, via 5-fold cross validation on the training set, every possible permutation of the available Clark and Word2vec clustering lexicons obtained from the data sources. Once the best combination of Clark and Word2vec clustering lexicons per data source was found, we tried to combine them with the Brown clusters. The result is a rather simple but very competitive system that has proven to be highly successful in the most popular Named Entity Recognition and Classification (NER) benchmarks, both in out-of-domain and in-domain evaluations. Furthermore, it was demonstrated that the system also performed robustly across languages without any language-specific tuning. Details of the system's implementation, including detailed description of the local and clustering features, can be found in BIBREF7 , including a section on how to combine the clustering features. A preliminary version of this system BIBREF20 was the winner of the OTE sub-task in the ABSA 2015 edition (English only). In the next section we show that this system obtains state-of-the-art results not only across domains and languages for NER, but also for other tasks such as Opinion Target Extraction. The results reported are obtained using the official ABSA evaluation scripts BIBREF4 , BIBREF5 , BIBREF6 . Experimental Results In this section we report on the experiments performed using the system and data described above. First we will present the English results for the three ABSA editions as well as a comparison with previous work. After that we will do the same for 5 additional languages included in the ABSA 2016 edition: Dutch, French, Russian, Spanish and Turkish. 
The local and clustering features, as described in Section SECREF11 , are the same for every language and evaluation setting. The only change is the clustering lexicons used for the different languages. As stated in section SECREF11 , the best cluster combination is chosen via 5-fold cross validation (CV) on the training data. We first try every permutation with the Clark and Word2vec clusters. Once the best combination is obtained, we then try with the Brown clusters obtaining thus the final model for each language and dataset. English Table TABREF16 provides detailed results on the Opinion Target Extraction (OTE) task for English. We show in bold our best model (ALL) chosen via 5-fold CV on the training data. Moreover, we also show the results of the best models using only one type of clustering feature, namely, the best Brown, Clark and Word2vec models, respectively. The first noteworthy issue is that the same model obtains the best results on the three English datasets. Second, it is also interesting to note the huge gains obtained by the clustering features, between 6-7 points in F1 score across the three ABSA datasets. Third, the results show that the combination of clustering features induced from different data sources is crucial. Fourth, the clustering features improve the recall by 12-15 points in the 2015 and 2016 data, and around 7 points for 2014. Finally, while in 2014 the precision also increases, in the 2015 setting it degrades almost by 4 points in F1 score. Table TABREF17 compares our results with previous work. MIN refers to the multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations with manually developed rules for detecting opinion expressions BIBREF27 . CNN-SenticNet is the 7 layer CNN with Amazon word embeddings, POS, linguistic rules based on syntax patterns and SenticNet BIBREF23 . LSTM is a Long Short Term Memory neural network built on top of word embeddings as proposed by BIBREF21 . WDEmb BIBREF35 uses word and dependency path, linear context and dependency context embedding features the input to a CRF. RNCRF is a joint model with CRF and a recursive neural network whereas CMLA is the Coupled Multilayer Attentions model described in section SECREF4 , both systems proposed by BIBREF26 . DLIREC-NLANGP is the winning system at ABSA 2014 and 2016 BIBREF17 , BIBREF18 , BIBREF19 while the penultimate row refers to our own system for all the three benchmarks (details in Table TABREF16 ). The results of Table TABREF17 show that our system, despite its simplicity, is highly competitive, obtaining the best results on the 2015 and 2016 datasets and a competitive performance on the 2014 benchmark. In particular, we outperform much more complex and language-specific approaches tuned via language-specific features, such as that of DLIREC-NLANGP. Furthermore, while the deep learning approaches (enriched with human-engineered linguistic features) obtain comparable or better results on the 2014 data, that is not the case for the 2015 and 2016 benchmarks, where our system outperforms also the MIN and CMLA models (systems which require manually added rules and gold-standard opinion expressions to obtain their best results, as explained in section SECREF4 ). In this sense, this means that our system obtains better results than MIN and CMLA by learning the targets independently instead of jointly learning the target and those expressions that convey the polarity of the opinion, namely, the opinion expression. 
There seems to be also a correlation between the size of the datasets and performance, given that the results on the 2014 data are much higher than those obtained using the 2015 and 2016 datasets. This might be due to the fact that the 2014 training set is substantially larger, as detailed in Table TABREF7 . In fact, the smaller datasets seem to affect more the deep learning approaches (LSTM, WDEmb, RNCRF) where only the MIN and CMLA models obtain similar results to ours, albeit using manually added language-specific annotations. Finally, it would have been interesting to compare MIN, CNN-SenticNet and CMLA with our system on the three ABSA benchmarks, but their systems are not publicly available. Multilingual We trained our system for 5 other languages on the ABSA 2016 datasets, using the same strategy as for English. We choose the best Clark-Word2vec combination (with and without Brown clusters) via 5-cross validation on the training data. The features are exactly the same as those used for English, the only change is the data on which the clusters are trained. Table TABREF19 reports on the detailed results obtained for each of the languages. In bold we show the best model chosen via 5-fold CV. Moreover, we also show the best models using only one of each of the clustering features. The first difference with respect to the English results is that the Brown clustering features are, in three out of five settings, detrimental to performance. Second, that combining clustering features is only beneficial for Spanish. Third, the overall results are in general lower than those obtained in the 2016 English data. Finally, the difference between the best results and the results using the Local features is lower than for English, even though the Local results are similar to those obtained with the English datasets (except for Turkish, but this is due to the significantly smaller size of the data, as shown in Table TABREF7 ). We believe that all these four issues are caused, at least partially, by the lack of domain-specific clustering features used for the multilingual experiments. In other words, while for the English experiments we leveraged the Yelp dataset to train the clustering algorithms, in the multilingual setting we first tried with already available clusters induced from the Wikipedia. Thus, it is to be expected that the gains obtained by clustering features obtained from domain-specific data such as Yelp would be superior to those achieved by the clusters trained on out-of-domain data. In spite of this, Table TABREF20 shows that our system outperforms the best previous approaches across the five languages. In some cases, such as Turkish and Russian, the best previous scores were the baselines provided by the ABSA organizers, but for Dutch, French and Spanish our system is significantly better than current state-of-the-art. In particular, and despite using the same system for every language, we improve over GTI's submission, which implemented a CRF system with linguistic features specific to Spanish BIBREF28 . Discussion and Error Analysis Considering the simplicity of our approach, we obtain best results for 6 languages and 7 different settings in the Opinion Target Extraction (OTE) benchmark for the restaurant domain using the ABSA 2014-2016 datasets. 
These results are obtained without linguistic or manually-engineered features, relying on injecting external knowledge from the combination of clustering features to obtain a robust system across languages, outperforming other more complex and language-specific systems. Furthermore, the feature set used is the same for every setting, reducing human intervention to a minimum and establishing a clear methodology for a fast and easy creation of competitive OTE multilingual taggers. The results also confirm the behaviour of these clustering algorithms to provide features for sequence labelling tasks such as OTE and Named Entity Recognition (NER), as previously discussed in BIBREF7 . Thus, in every evaluation setting the best results using Brown clusters as features were obtained when data close to the application domain and text genre, even if relatively small, was used to train the Brown algorithm. This can be clearly seen if we compare the English with the multilingual results. For English, the models including Brown clusters improve the Local features over 3-5 points in F1 score, whereas for Spanish, Dutch and Russian, they worsen performance. The reason is that for English the Yelp dataset is used whereas for the rest of languages the clusters are induced using the Wikipedia, effectively an out-of-domain corpus. The exception is Turkish, for which a 6 point gain in F1 score is obtained, but we believe that is probably due to the small size of the training data used for training the Local model. In contrast, Word2vec clusters clearly benefit from larger amounts of data, as illustrated by the best English Word2vec model being the one trained using the Wikipedia, and not the Yelp dataset, which is closer to the application domain. Finally, the Clark algorithm seems to be the most versatile as it consistently outperforms the other two clustering methods in 4 out of the 8 evaluation settings presented. Summarizing: (i) Brown clusters perform better when leveraged from source data close to the application domain, even if small in size; (ii) Clark clusters are the most robust of the three with respect to the size and domain of the data used; and (iii) for Word2vec size is the crucial factor. The larger the source data the better the performance. Thus, instead of choosing over one clustering type or the other, our system provides a method to effectively combining them, depending on the data sources available, to obtain robust and language independent sequence labelling systems. Finally, results show that our models are particularly competitive when the amount of training data available is small, allowing us to compete with more complex systems including also manually-engineered features, as shown especially by the English results on the 2015 and 2016 data. Error Analysis We will now discuss the shortcomings and most common errors performed by our system for the OTE task. By looking at the overall results in terms of precision and recall, it is possible to see the following patterns: With respect to the Local models, precision is consistently better than recall or, in other words, the coverage of the Local models is quite low. Tables TABREF16 and TABREF19 show that adding clustering features to the Local models allows to improve the recall for every evaluation setting, although with different outcomes. Overall, precision suffers, except for French. 
Furthermore, in three cases (English 2014, 2016 and Russian) precision is lower than recall, whereas the remaining 5 evaluations show that, despite large improvements in F1 score, most errors in our system are caused by false negatives, as it can be seen in Table TABREF23 . Table TABREF25 displays the top 5 most common false positives and false negative errors for English, Spanish and French. By inspecting our system's output, and both the test and training sets, we found out that there were three main sources of errors: (a) errors caused by ambiguity in the use of certain source forms that may or may not refer to an opinion target; (b) span errors, where the target has only been partially annotated; and (c) unknown targets, which the system was unable to annotate by generalizing on the training data or clusters. With respect to type (a), it is useful to look at the most common errors for all three languages, namely, `place', `food' and `restaurant', which are also among the top 5 most frequent targets in the gold standard sets. By looking at Examples (1-3) we would say that in all three cases `place' should be annotated as opinion target. However, (2) is a false positive (FP), (3) is a false negative (FN) and (1) is an example from the training set in which `place' is annotated as target. This is the case with many instances of `place' for which there seems to be some inconsistency in the actual annotation of the training and test set examples. Example (1): Avoid this place! Example (2): this place is a keeper! Example (3): it is great place to watch sporting events. For other frequent type (a) errors, ambiguity is the main problem. Thus, in Spanish the use of `comida' and `restaurante' is highly ambiguous and causes many FPs and FNs because sometimes it is actually an opinion target whereas in many other other cases it is just referring to the meal or the restaurant themselves without expressing any opinion about them. The same phenomenon occurs for “food” and “restaurant” in English and for `cuisine' and `restaurant' in French. Span type (b) errors are typically caused by long opinion targets such as “filet mignon on top of spinach and mashed potatoes” for which our system annotates “filet” and “spinach” as separate targets, or “chicken curry and chicken tikka masala” which is wrongly tagged as one target. These cases are difficult because on the surface they look similar but the first one refers to one dish only, hence one target, whereas the second one refers to two separate dishes for which two different opinion targets should be annotated. Of course, these cases are particularly hurtful because they count as both FP and FN. Finally, type (c) errors are usually caused by lack of generalization of our system to deal with unknown targets. Example (4-7) contain various mentions to the “Ray's” restaurant, which is in the top 5 errors for the English 2016 test set. Example (4): After 12 years in Seattle Ray's rates as the place we always go back to. Example (5): We were only in Seattle for one night and I'm so glad we picked Rays for dinner! Example (6): I love Dungeness crabs and at Ray's you can get them served in about 6 different ways! Example (7): Imagine my happy surprise upon finding that the views are only the third-best thing about Ray's! 
Example (8): Ray's is something of a Seattle institution. Examples (4), (5) and (7) are FNs, (6) is an FP caused by wrongly identifying the target as “Ray's you”, whereas (8) is not even annotated in the gold standard or by our system, although it should have been. Concluding Remarks In this research note we provide additional empirical experimentation complementing BIBREF7 , reporting best results for Opinion Target Extraction for 6 languages and 7 datasets using the same set of simple, shallow and language independent features. Furthermore, the results provide some interesting insights with respect to the use of clusters to inject external knowledge via semi-supervised features. First, Brown clusters are particularly beneficial when trained on domain-related data. This is borne out in the multilingual setting, where the Brown clusters (trained on out-of-domain Wikipedia data) worsen the system's performance for every language except for Turkish. Second, the results also show that Clark and Word2vec clusters improve performance in general, even if induced on out-of-domain data. Third, for best performance it is convenient to combine clusters obtained from diverse data sources, both from in- and out-of-domain corpora. Finally, the results indicate that, even when the amount of training data is small, such as in the 2015 and 2016 English benchmarks, our system's performance remains competitive thanks to the combination of clustering features. This, together with the lack of linguistic features, facilitates the easy and fast development of systems for new domains or languages. These considerations thus confirm the hypotheses stated in BIBREF7 with respect to the use of clustering features to obtain robust sequence taggers across languages and tasks. The system and models for every language and dataset are available as part of the ixa-pipe-opinion module for public use and reproducibility of results. Acknowledgments First, we would like to thank the anonymous reviewers for their comments to improve the paper. We would also like to thank Iñaki San Vicente for his help obtaining the Yelp data. This work has been supported by the Spanish Ministry of Economy and Competitiveness (MINECO/FEDER, UE), under the projects TUNER (TIN2015-65308-C5-1-R) and CROSSTEXT (TIN2015-72646-EXP).
Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context
e98d331faacd50f8ec588d2466b5a85da1f37e6f
e98d331faacd50f8ec588d2466b5a85da1f37e6f_0
Q: Do they compare results against state-of-the-art language models? Text: Introduction One of the principal challenges in computational linguistics is to account for the word order of the document or utterance being processed BIBREF0 . Of course, the numbers of possible phrases grows exponentially with respect to a given phrase length, requiring an approximate approach to summarizing its content. rnn are such an approach, and they are used in various tasks in nlp, such as machine translation BIBREF1 , abstractive summarization BIBREF2 and question answering BIBREF3 . However, rnn, as approximations, suffer from numerical troubles that have been identified, such as that of recovering from past errors when generating phrases. We take interest in a model that mitigates this problem, mrnn, and how it has been and can be combined for new models. To evaluate these models, we use the task of recurrent language modeling, which consists in predicting the next token (character or word) in a document. This paper is organized as follows: rnn and mrnn are introduced respectively in Sections SECREF2 and SECREF3 . Section SECREF4 presents new and existing multiplicative models. Section SECREF5 describes the datasets and experiments performed, as well as results obtained. Sections SECREF6 discusses and concludes our findings. Recurrent neural networks rnn are powerful tools of sequence modeling that can preserve the order of words or characters in a document. A document is therefore a sequence of words, INLINEFORM0 . Given the exponential growth of possible histories with respect to the sequence length, the probability of observing a given sequence needs to be approximated. rnn will make this approximation using the product rule, INLINEFORM1 and updating a hidden state at every time step. This state is first null, INLINEFORM0 Thereafter, it is computed as a function of the past hidden state as well as the input at the current time step, INLINEFORM0 known as the transition function. INLINEFORM0 is a learned function, often taking the form INLINEFORM1 This allows, in theory, for straightforward modeling of sequences of arbitrary length. In practice, rnn encounter some difficulties that need some clever engineering to be mitigated. For example, learning long-term dependencies such as those found in language is not without its share of woes arising from numerical considerations, such as the well-known vanishing gradient problem BIBREF4 . This can be addressed with gating mechanisms, such as lstm BIBREF5 and gru BIBREF6 . A problem that is more specific to generative rnn is their difficulty recovering from past errors BIBREF7 , which BIBREF8 argue arises from having hidden-state transitions that are highly correlated across possible inputs. One approach to adapting rnn to have more input-dependent transition functions is to use the multiplicative "trick" BIBREF9 . This approximates the idea of having the input at each time synthesize a dedicated kernel of parameters dictating the transition from the previous hidden state to the next. These two approaches can be combined, as in the mlstm BIBREF8 . We begin by contending that, in making rnn multiplicative, sharing what is known as the intermediate state does not significantly hinder performance when parameter counts are equal. We verify this with existing as well as new gated models on several well-known language modeling tasks. 
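A minimal NumPy sketch of the recurrent language model just described, assuming already-trained weights: the hidden state follows the generic tanh transition and the sequence log-probability is accumulated with the product rule. Shapes and parameter names are illustrative.

```python
# Sketch: one RNN step and product-rule scoring of a token sequence.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_step(x_onehot, h_prev, W, U, b):
    return np.tanh(W @ x_onehot + U @ h_prev + b)

def sequence_log_prob(token_ids, params, vocab_size):
    W, U, b, V, c = params          # V, c project the hidden state to logits
    h = np.zeros(U.shape[0])        # h_0 is the null initial state
    log_p = 0.0
    for prev, nxt in zip(token_ids[:-1], token_ids[1:]):
        x = np.eye(vocab_size)[prev]
        h = rnn_step(x, h, W, U, b)
        log_p += np.log(softmax(V @ h + c)[nxt])
    return log_p
```

The gated and multiplicative variants discussed next replace the inner transition while keeping this outer factorisation of the sequence probability.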
Multiplicative RNNs Most recurrent neural network architectures, including lstm and gru share the following building block: DISPLAYFORM0 INLINEFORM0 is the candidate hidden state, computed from the previous hidden state, INLINEFORM1 , and the current input, INLINEFORM2 , weighted by the parameter matrices INLINEFORM3 and INLINEFORM4 , respectively. This candidate hidden state may then be passed through gating mechanisms and non-linearities depending on the specific recurrent model. Let us assume for simplicity that the input is a one-hot vector (one component is 1, the rest are 0 BIBREF10 [see p.45]), as it is often the case in nlp. Then, the term INLINEFORM0 is reduced to a single column of INLINEFORM1 and can therefore be thought of as an input-dependent bias in the hidden state transition. As the dependencies we wish to establish between the elements of the sequences under consideration become more distant, the term INLINEFORM2 will have to be significantly larger than this input-dependent bias, INLINEFORM3 , in order to remain unchanged across time-steps. This will mean that from one time-step to the next, the hidden-to-hidden transition will be highly correlated across possible inputs. This can be addressed by having more input-dependent hidden state transitions, making rnn more expressive. In order to remedy the aforementioned problem, each possible input INLINEFORM0 can be given its own matrix INLINEFORM1 parameterizing the contribution of INLINEFORM2 to INLINEFORM3 . DISPLAYFORM0 This is known as a trnn BIBREF9 , because all the matrices can be stacked to form a rank 3 tensor, INLINEFORM0 . The input INLINEFORM1 selects the relevant slice of the tensor in the one-hot case and a weighted sum over all slices in the dense case. The resulting matrix then acts as the appropriate INLINEFORM2 . However, such an approach is impractical because of the high parameter count such a tensor would entail. The tensor can nonetheless be approximated by factorizing it BIBREF11 as follows: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are weight matrices, and INLINEFORM2 is the operator turning a vector INLINEFORM3 into a diagonal matrix where the elements of INLINEFORM4 form the main diagonal of said matrix. Replacing INLINEFORM5 in Equation ( EQREF2 ) by this tensor factorization, we obtain DISPLAYFORM0 where INLINEFORM0 is known as the intermediate state, given by DISPLAYFORM0 Here, INLINEFORM0 refers to the Hadamard or element-wise product of vectors. The intermediate state is the result of having the input apply a learned filter via the new parameter kernel INLINEFORM1 to the factors of the hidden state. It should be noted that the dimensionality of INLINEFORM2 is free and, should it become sufficiently large, the factorization becomes as expressive as the tensor. The ensuing model is known as a mrnn BIBREF9 . Sharing intermediate states While mrnn outperform simple rnn in character-level language modeling, they have been found wanting with respect to the popular lstm BIBREF5 . This prompted BIBREF8 to apply the multiplicative "trick" to lstm resulting in the mlstm, which achieved promising results in several language modeling tasks BIBREF8 . mLSTM Gated rnn, such as lstm and gru, use gates to help signals move through the network. The value of these gates is computed in much the same way as the candidate hidden state, albeit with different parameters. 
For example, lstm uses two different gates, INLINEFORM0 and INLINEFORM1 in updating its memory cell, INLINEFORM2 , DISPLAYFORM0 It uses another gate, INLINEFORM0 , in mapping INLINEFORM1 to the new hidden state, INLINEFORM2 , DISPLAYFORM0 where INLINEFORM0 is the sigmoid function, squashing its input between 0 and 1. INLINEFORM1 and INLINEFORM2 are known as forget and input gates, respectively. The forget gates allows the network to ignore components of the value of the memory cell at the past state. The input gate filters out certain components of the new hidden state. Finally, the output gates separates the memory cell from the actual hidden state. The values of these gates are computed at each time step as follows: DISPLAYFORM0 DISPLAYFORM1 Each gate has its own set of parameters to infer. If we were to replace each INLINEFORM0 by a tensor factorization as in mrnn, we would obtain a mlstm model. However, in the original formulation of mlstm, there is no factorization of each would-be INLINEFORM1 individually. There is no separate intermediate state for each gate, as one would expect. Instead, a single intermediate state, INLINEFORM2 , is computed to replace INLINEFORM3 in all equations in the system, by Eq. EQREF5 . Furthermore, each gate has its own INLINEFORM4 weighting INLINEFORM5 . Their values are computed as follows: DISPLAYFORM0 DISPLAYFORM1 The model can therefore no longer be understood as as an approximation of the trnn. Nonetheless, it has achieved empirical success in nlp. We therefore try to explore the empirical merits of this shared parametrization and apply them to other rnn architectures. True mLSTM We have presented the original mlstm model with its shared intermediate state. If we wish to remain true to the original multiplicative model, however, we have to factorize every would-be INLINEFORM0 tensor separately. We have: DISPLAYFORM0 DISPLAYFORM1 with each INLINEFORM0 being given by a separate set of parameters: DISPLAYFORM0 We henceforth refer to this model as tmlstm. We sought to apply the same modifications to the gru model, as lstm and gru are known to perform similarly BIBREF12 , BIBREF13 , BIBREF14 . That is, we build a tmgru model, as well as a mgru with a shared intermediate state. GRU The gru was first proposed by BIBREF6 as a lighter, simpler variant of lstm. gru relies on two gates, called, respectively, the update and reset gates, and no additional memory cell. These gates intervene in the computation of the hidden state as follows: DISPLAYFORM0 where the candidate hidden state, INLINEFORM0 , is given by: DISPLAYFORM0 The update gate deletes specific components of the hidden state and replaces them with those of the candidate hidden state, thus updating its content. On the other hand, the reset gate allows the unit to start anew, as if it were reading the first symbol of the input sequence. They are computed much in the same way as the gates of lstm: DISPLAYFORM0 DISPLAYFORM0 True mGRU We can now make gru multiplicative by using the tensor factorization for INLINEFORM0 and INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM1 with each INLINEFORM0 given by Eq. EQREF19 . There is a subtlety to computing INLINEFORM1 , as we need to apply the reset gate to INLINEFORM2 . While INLINEFORM3 itself is given by Eq. EQREF4 , INLINEFORM4 is not computed the same way as in mlstm and mrnn. Instead, it is given by: DISPLAYFORM0 mGRU with shared intermediate state Sharing an intermediate state is not as immediate for gru. 
This is due to the application of INLINEFORM0 , which we need in computing the intermediate state that we want to share. That is, INLINEFORM1 and INLINEFORM2 would both depend on each other. We modify the role of INLINEFORM3 to act as a filter on INLINEFORM4 , rather than a reset on individual components of INLINEFORM5 . Note that, when all components of INLINEFORM6 go to zero, it amounts to having all components of INLINEFORM7 at zero. We have DISPLAYFORM0 and DISPLAYFORM0 INLINEFORM0 is given by DISPLAYFORM0 with INLINEFORM0 the same as in mrnn and mlstm this time, i.e. Eq. EQREF5 . The final hidden state is computed the same way as in the original gru (Eq. EQREF21 ). Experiments in character-level language modeling Character-level language modeling (or character prediction) consists in predicting the next character while reading a document one character at a time. It is a common benchmark for rnn because of the heightened need for shared parametrization when compared to word-level models. We test mgru on two well-known datasets, the Penn Treebank and Text8. Penn Treebank The Penn Treebank dataset BIBREF15 comes from a series of Wall Street Journal articles written in English. Following BIBREF16 , sections 0-20 were used for training, 21-22 for validation and 23-24 for testing, respectively, which amounts to 5.1M, 400K and 450K characters, respectively. The vocabulary consists of 10K lowercase words. All punctuation is removed and numbers were substituted for a single capital N. All words out of vocabulary are replaced by the token <unk>. The training sequences were passed to the model in batches of 32 sequences. Following BIBREF8 , we built an initial mlstm model of 700 units. However, we set the dimensionality of the intermediate state to that of the input in order to keep the model small. We do the same for our mgru, tmlstm and tmgru, changing only the size of the hidden state so that all four models have roughly the same parameter count. We trained it using the Adam optimizer BIBREF17 , selecting the best model on validation over 10 epochs. We apply no regularization other than a checkpoint which keeps the best model over all epochs. The performance of the model is evaluated using cross entropy in bpc, which is INLINEFORM0 of perplexity. All models outperform previously reported results for mlstm BIBREF8 despite lower parameter counts. This is likely due to our relatively small batch size. However, they perform fairly similarly. Encouraged by these results, we built an mgru with both hidden and intermediate state sizes set to that of the original mlstm (700). This version highly surpasses the previous state of the art while still having fewer parameters than previous work. For the sake of comparison, results as well as parameter counts (where available) of our models (bold) and related approaches are presented in Table TABREF34 . mgru and larger mgru, our best models, achieved respectively an error of 1.07 and 0.98 bpc on the test data, setting a new state of the art for this task. Text8 The Text8 corpus BIBREF21 comprises the first 100M plain text characters in English from Wikipedia in 2006. As such, the alphabet consists of the 26 letters of the English alphabet as well as the space character. No vocabulary restrictions were put in place. As per BIBREF16 , the first 90M and 5M characters were used for training and validation, respectively, with the last 5M used for testing. Encouraged by our results on the Penn Treebank dataset, we opted to use similar configurations. 
However, as the data is one long sequence of characters, we divide it into sequences of 200 characters. We pass these sequences to the model in slightly larger batches of 50 to speed up computation. Again, the dimensionality of the hidden state for mlstm is set at 450 after the original model, and that of the intermediate state is set to the size of the alphabet. The size of the hidden state is adjusted for the other three models as it was for the PTB experiments. The model is also trained using the Adam optimizer over 10 epochs. The best model as per validation data over 10 epochs achieves 1.40 bpc on the test data, slightly surpassing an mlstm of smaller hidden-state dimensionality (450) but larger parameter count. Our results are more modest, as are those of the original mlstm. Once again, results do not vary greatly between models. As with the Penn Treebank, we proceed with building an mgru with both hidden and intermediate state sizes set to 450. This improves performance to 1.21 bpc, setting a new state of the art for this task and surpassing a large mlstm of 1900 units from BIBREF8 despite having far fewer parameters (45M to 5M). For the sake of comparison, results as well as parameter counts of our models and related approaches are presented in Table TABREF36 . It should be noted that some of these models employ dynamic evaluation BIBREF7 , which fits the model further during evaluation. We refer the reader to BIBREF22 . These models are indicated by a star. Conclusion We have found that competitive results can be achieved with mrnn using small models. We have not found significant differences in the approaches presented, despite added non-intuitive parameter-sharing constraints when controlling for model size. Our results are restricted to character-level language modeling. Along this line of thought, previous work on mrnn demonstrated their increased potential when compared to their regular variants BIBREF9 , BIBREF8 , BIBREF23 . We therefore offer other variants as well as a first investigation into their differences. We hope to have evinced the impact of increased flexibility in hidden-state transitions on rnn sequence-modeling capabilities. Further work in this area is required to transpose these findings into applied tasks in nlp.
Yes
fbe22e133fa919f06abd8afbed3395af51d2bfef
fbe22e133fa919f06abd8afbed3395af51d2bfef_0
Q: Do they integrate the second-order term in the mLSTM? Text: Introduction One of the principal challenges in computational linguistics is to account for the word order of the document or utterance being processed BIBREF0 . Of course, the numbers of possible phrases grows exponentially with respect to a given phrase length, requiring an approximate approach to summarizing its content. rnn are such an approach, and they are used in various tasks in nlp, such as machine translation BIBREF1 , abstractive summarization BIBREF2 and question answering BIBREF3 . However, rnn, as approximations, suffer from numerical troubles that have been identified, such as that of recovering from past errors when generating phrases. We take interest in a model that mitigates this problem, mrnn, and how it has been and can be combined for new models. To evaluate these models, we use the task of recurrent language modeling, which consists in predicting the next token (character or word) in a document. This paper is organized as follows: rnn and mrnn are introduced respectively in Sections SECREF2 and SECREF3 . Section SECREF4 presents new and existing multiplicative models. Section SECREF5 describes the datasets and experiments performed, as well as results obtained. Sections SECREF6 discusses and concludes our findings. Recurrent neural networks rnn are powerful tools of sequence modeling that can preserve the order of words or characters in a document. A document is therefore a sequence of words, INLINEFORM0 . Given the exponential growth of possible histories with respect to the sequence length, the probability of observing a given sequence needs to be approximated. rnn will make this approximation using the product rule, INLINEFORM1 and updating a hidden state at every time step. This state is first null, INLINEFORM0 Thereafter, it is computed as a function of the past hidden state as well as the input at the current time step, INLINEFORM0 known as the transition function. INLINEFORM0 is a learned function, often taking the form INLINEFORM1 This allows, in theory, for straightforward modeling of sequences of arbitrary length. In practice, rnn encounter some difficulties that need some clever engineering to be mitigated. For example, learning long-term dependencies such as those found in language is not without its share of woes arising from numerical considerations, such as the well-known vanishing gradient problem BIBREF4 . This can be addressed with gating mechanisms, such as lstm BIBREF5 and gru BIBREF6 . A problem that is more specific to generative rnn is their difficulty recovering from past errors BIBREF7 , which BIBREF8 argue arises from having hidden-state transitions that are highly correlated across possible inputs. One approach to adapting rnn to have more input-dependent transition functions is to use the multiplicative "trick" BIBREF9 . This approximates the idea of having the input at each time synthesize a dedicated kernel of parameters dictating the transition from the previous hidden state to the next. These two approaches can be combined, as in the mlstm BIBREF8 . We begin by contending that, in making rnn multiplicative, sharing what is known as the intermediate state does not significantly hinder performance when parameter counts are equal. We verify this with existing as well as new gated models on several well-known language modeling tasks. 
Multiplicative RNNs Most recurrent neural network architectures, including lstm and gru share the following building block: DISPLAYFORM0 INLINEFORM0 is the candidate hidden state, computed from the previous hidden state, INLINEFORM1 , and the current input, INLINEFORM2 , weighted by the parameter matrices INLINEFORM3 and INLINEFORM4 , respectively. This candidate hidden state may then be passed through gating mechanisms and non-linearities depending on the specific recurrent model. Let us assume for simplicity that the input is a one-hot vector (one component is 1, the rest are 0 BIBREF10 [see p.45]), as it is often the case in nlp. Then, the term INLINEFORM0 is reduced to a single column of INLINEFORM1 and can therefore be thought of as an input-dependent bias in the hidden state transition. As the dependencies we wish to establish between the elements of the sequences under consideration become more distant, the term INLINEFORM2 will have to be significantly larger than this input-dependent bias, INLINEFORM3 , in order to remain unchanged across time-steps. This will mean that from one time-step to the next, the hidden-to-hidden transition will be highly correlated across possible inputs. This can be addressed by having more input-dependent hidden state transitions, making rnn more expressive. In order to remedy the aforementioned problem, each possible input INLINEFORM0 can be given its own matrix INLINEFORM1 parameterizing the contribution of INLINEFORM2 to INLINEFORM3 . DISPLAYFORM0 This is known as a trnn BIBREF9 , because all the matrices can be stacked to form a rank 3 tensor, INLINEFORM0 . The input INLINEFORM1 selects the relevant slice of the tensor in the one-hot case and a weighted sum over all slices in the dense case. The resulting matrix then acts as the appropriate INLINEFORM2 . However, such an approach is impractical because of the high parameter count such a tensor would entail. The tensor can nonetheless be approximated by factorizing it BIBREF11 as follows: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are weight matrices, and INLINEFORM2 is the operator turning a vector INLINEFORM3 into a diagonal matrix where the elements of INLINEFORM4 form the main diagonal of said matrix. Replacing INLINEFORM5 in Equation ( EQREF2 ) by this tensor factorization, we obtain DISPLAYFORM0 where INLINEFORM0 is known as the intermediate state, given by DISPLAYFORM0 Here, INLINEFORM0 refers to the Hadamard or element-wise product of vectors. The intermediate state is the result of having the input apply a learned filter via the new parameter kernel INLINEFORM1 to the factors of the hidden state. It should be noted that the dimensionality of INLINEFORM2 is free and, should it become sufficiently large, the factorization becomes as expressive as the tensor. The ensuing model is known as a mrnn BIBREF9 . Sharing intermediate states While mrnn outperform simple rnn in character-level language modeling, they have been found wanting with respect to the popular lstm BIBREF5 . This prompted BIBREF8 to apply the multiplicative "trick" to lstm resulting in the mlstm, which achieved promising results in several language modeling tasks BIBREF8 . mLSTM Gated rnn, such as lstm and gru, use gates to help signals move through the network. The value of these gates is computed in much the same way as the candidate hidden state, albeit with different parameters. 
For example, lstm uses two different gates, INLINEFORM0 and INLINEFORM1 in updating its memory cell, INLINEFORM2 , DISPLAYFORM0 It uses another gate, INLINEFORM0 , in mapping INLINEFORM1 to the new hidden state, INLINEFORM2 , DISPLAYFORM0 where INLINEFORM0 is the sigmoid function, squashing its input between 0 and 1. INLINEFORM1 and INLINEFORM2 are known as forget and input gates, respectively. The forget gates allows the network to ignore components of the value of the memory cell at the past state. The input gate filters out certain components of the new hidden state. Finally, the output gates separates the memory cell from the actual hidden state. The values of these gates are computed at each time step as follows: DISPLAYFORM0 DISPLAYFORM1 Each gate has its own set of parameters to infer. If we were to replace each INLINEFORM0 by a tensor factorization as in mrnn, we would obtain a mlstm model. However, in the original formulation of mlstm, there is no factorization of each would-be INLINEFORM1 individually. There is no separate intermediate state for each gate, as one would expect. Instead, a single intermediate state, INLINEFORM2 , is computed to replace INLINEFORM3 in all equations in the system, by Eq. EQREF5 . Furthermore, each gate has its own INLINEFORM4 weighting INLINEFORM5 . Their values are computed as follows: DISPLAYFORM0 DISPLAYFORM1 The model can therefore no longer be understood as as an approximation of the trnn. Nonetheless, it has achieved empirical success in nlp. We therefore try to explore the empirical merits of this shared parametrization and apply them to other rnn architectures. True mLSTM We have presented the original mlstm model with its shared intermediate state. If we wish to remain true to the original multiplicative model, however, we have to factorize every would-be INLINEFORM0 tensor separately. We have: DISPLAYFORM0 DISPLAYFORM1 with each INLINEFORM0 being given by a separate set of parameters: DISPLAYFORM0 We henceforth refer to this model as tmlstm. We sought to apply the same modifications to the gru model, as lstm and gru are known to perform similarly BIBREF12 , BIBREF13 , BIBREF14 . That is, we build a tmgru model, as well as a mgru with a shared intermediate state. GRU The gru was first proposed by BIBREF6 as a lighter, simpler variant of lstm. gru relies on two gates, called, respectively, the update and reset gates, and no additional memory cell. These gates intervene in the computation of the hidden state as follows: DISPLAYFORM0 where the candidate hidden state, INLINEFORM0 , is given by: DISPLAYFORM0 The update gate deletes specific components of the hidden state and replaces them with those of the candidate hidden state, thus updating its content. On the other hand, the reset gate allows the unit to start anew, as if it were reading the first symbol of the input sequence. They are computed much in the same way as the gates of lstm: DISPLAYFORM0 DISPLAYFORM0 True mGRU We can now make gru multiplicative by using the tensor factorization for INLINEFORM0 and INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM1 with each INLINEFORM0 given by Eq. EQREF19 . There is a subtlety to computing INLINEFORM1 , as we need to apply the reset gate to INLINEFORM2 . While INLINEFORM3 itself is given by Eq. EQREF4 , INLINEFORM4 is not computed the same way as in mlstm and mrnn. Instead, it is given by: DISPLAYFORM0 mGRU with shared intermediate state Sharing an intermediate state is not as immediate for gru. 
This is due to the application of INLINEFORM0 , which we need in computing the intermediate state that we want to share. That is, INLINEFORM1 and INLINEFORM2 would both depend on each other. We modify the role of INLINEFORM3 to act as a filter on INLINEFORM4 , rather than a reset on individual components of INLINEFORM5 . Note that, when all components of INLINEFORM6 go to zero, it amounts to having all components of INLINEFORM7 at zero. We have DISPLAYFORM0 and DISPLAYFORM0 INLINEFORM0 is given by DISPLAYFORM0 with INLINEFORM0 the same as in mrnn and mlstm this time, i.e. Eq. EQREF5 . The final hidden state is computed the same way as in the original gru (Eq. EQREF21 ). Experiments in character-level language modeling Character-level language modeling (or character prediction) consists in predicting the next character while reading a document one character at a time. It is a common benchmark for rnn because of the heightened need for shared parametrization when compared to word-level models. We test mgru on two well-known datasets, the Penn Treebank and Text8. Penn Treebank The Penn Treebank dataset BIBREF15 comes from a series of Wall Street Journal articles written in English. Following BIBREF16 , sections 0-20 were used for training, 21-22 for validation and 23-24 for testing, respectively, which amounts to 5.1M, 400K and 450K characters, respectively. The vocabulary consists of 10K lowercase words. All punctuation is removed and numbers were substituted for a single capital N. All words out of vocabulary are replaced by the token <unk>. The training sequences were passed to the model in batches of 32 sequences. Following BIBREF8 , we built an initial mlstm model of 700 units. However, we set the dimensionality of the intermediate state to that of the input in order to keep the model small. We do the same for our mgru, tmlstm and tmgru, changing only the size of the hidden state so that all four models have roughly the same parameter count. We trained it using the Adam optimizer BIBREF17 , selecting the best model on validation over 10 epochs. We apply no regularization other than a checkpoint which keeps the best model over all epochs. The performance of the model is evaluated using cross entropy in bpc, which is INLINEFORM0 of perplexity. All models outperform previously reported results for mlstm BIBREF8 despite lower parameter counts. This is likely due to our relatively small batch size. However, they perform fairly similarly. Encouraged by these results, we built an mgru with both hidden and intermediate state sizes set to that of the original mlstm (700). This version highly surpasses the previous state of the art while still having fewer parameters than previous work. For the sake of comparison, results as well as parameter counts (where available) of our models (bold) and related approaches are presented in Table TABREF34 . mgru and larger mgru, our best models, achieved respectively an error of 1.07 and 0.98 bpc on the test data, setting a new state of the art for this task. Text8 The Text8 corpus BIBREF21 comprises the first 100M plain text characters in English from Wikipedia in 2006. As such, the alphabet consists of the 26 letters of the English alphabet as well as the space character. No vocabulary restrictions were put in place. As per BIBREF16 , the first 90M and 5M characters were used for training and validation, respectively, with the last 5M used for testing. Encouraged by our results on the Penn Treebank dataset, we opted to use similar configurations. 
However, as the data is one long sequence of characters, we divide it into sequences of 200 characters. We pass these sequences to the model in slightly larger batches of 50 to speed up computation. Again, the dimensionality of the hidden state for mlstm is set at 450 after the original model, and that of the intermediate state is set to the size of the alphabet. The size of the hidden state is adjusted for the other three models as it was for the PTB experiments. The model is also trained using the Adam optimizer over 10 epochs. The best model as per validation data over 10 epochs achieves 1.40 bpc on the test data, slightly surpassing an mlstm of smaller hidden-state dimensionality (450) but larger parameter count. Our results are more modest, as are those of the original mlstm. Once again, results do not vary greatly between models. As with the Penn Treebank, we proceed with building an mgru with both hidden and intermediate state sizes set to 450. This improves performance to 1.21 bpc, setting a new state of the art for this task and surpassing a large mlstm of 1900 units from BIBREF8 despite having far fewer parameters (45M to 5M). For the sake of comparison, results as well as parameter counts of our models and related approaches are presented in Table TABREF36 . It should be noted that some of these models employ dynamic evaluation BIBREF7 , which fits the model further during evaluation. We refer the reader to BIBREF22 . These models are indicated by a star. Conclusion We have found that competitive results can be achieved with mrnn using small models. We have not found significant differences in the approaches presented, despite added non-intuitive parameter-sharing constraints when controlling for model size. Our results are restricted to character-level language modeling. Along this line of thought, previous work on mrnn demonstrated their increased potential when compared to their regular variants BIBREF9 , BIBREF8 , BIBREF23 . We therefore offer other variants as well as a first investigation into their differences. We hope to have evinced the impact of increased flexibility in hidden-state transitions on rnn sequence-modeling capabilities. Further work in this area is required to transpose these findings into applied tasks in nlp.
Unanswerable
f319f2c3f9339b0ce47478f5aa0c32da387a156e
f319f2c3f9339b0ce47478f5aa0c32da387a156e_0
Q: Which dataset do they train their models on? Text: Introduction One of the principal challenges in computational linguistics is to account for the word order of the document or utterance being processed BIBREF0 . Of course, the numbers of possible phrases grows exponentially with respect to a given phrase length, requiring an approximate approach to summarizing its content. rnn are such an approach, and they are used in various tasks in nlp, such as machine translation BIBREF1 , abstractive summarization BIBREF2 and question answering BIBREF3 . However, rnn, as approximations, suffer from numerical troubles that have been identified, such as that of recovering from past errors when generating phrases. We take interest in a model that mitigates this problem, mrnn, and how it has been and can be combined for new models. To evaluate these models, we use the task of recurrent language modeling, which consists in predicting the next token (character or word) in a document. This paper is organized as follows: rnn and mrnn are introduced respectively in Sections SECREF2 and SECREF3 . Section SECREF4 presents new and existing multiplicative models. Section SECREF5 describes the datasets and experiments performed, as well as results obtained. Sections SECREF6 discusses and concludes our findings. Recurrent neural networks rnn are powerful tools of sequence modeling that can preserve the order of words or characters in a document. A document is therefore a sequence of words, INLINEFORM0 . Given the exponential growth of possible histories with respect to the sequence length, the probability of observing a given sequence needs to be approximated. rnn will make this approximation using the product rule, INLINEFORM1 and updating a hidden state at every time step. This state is first null, INLINEFORM0 Thereafter, it is computed as a function of the past hidden state as well as the input at the current time step, INLINEFORM0 known as the transition function. INLINEFORM0 is a learned function, often taking the form INLINEFORM1 This allows, in theory, for straightforward modeling of sequences of arbitrary length. In practice, rnn encounter some difficulties that need some clever engineering to be mitigated. For example, learning long-term dependencies such as those found in language is not without its share of woes arising from numerical considerations, such as the well-known vanishing gradient problem BIBREF4 . This can be addressed with gating mechanisms, such as lstm BIBREF5 and gru BIBREF6 . A problem that is more specific to generative rnn is their difficulty recovering from past errors BIBREF7 , which BIBREF8 argue arises from having hidden-state transitions that are highly correlated across possible inputs. One approach to adapting rnn to have more input-dependent transition functions is to use the multiplicative "trick" BIBREF9 . This approximates the idea of having the input at each time synthesize a dedicated kernel of parameters dictating the transition from the previous hidden state to the next. These two approaches can be combined, as in the mlstm BIBREF8 . We begin by contending that, in making rnn multiplicative, sharing what is known as the intermediate state does not significantly hinder performance when parameter counts are equal. We verify this with existing as well as new gated models on several well-known language modeling tasks. 
Multiplicative RNNs Most recurrent neural network architectures, including lstm and gru share the following building block: DISPLAYFORM0 INLINEFORM0 is the candidate hidden state, computed from the previous hidden state, INLINEFORM1 , and the current input, INLINEFORM2 , weighted by the parameter matrices INLINEFORM3 and INLINEFORM4 , respectively. This candidate hidden state may then be passed through gating mechanisms and non-linearities depending on the specific recurrent model. Let us assume for simplicity that the input is a one-hot vector (one component is 1, the rest are 0 BIBREF10 [see p.45]), as it is often the case in nlp. Then, the term INLINEFORM0 is reduced to a single column of INLINEFORM1 and can therefore be thought of as an input-dependent bias in the hidden state transition. As the dependencies we wish to establish between the elements of the sequences under consideration become more distant, the term INLINEFORM2 will have to be significantly larger than this input-dependent bias, INLINEFORM3 , in order to remain unchanged across time-steps. This will mean that from one time-step to the next, the hidden-to-hidden transition will be highly correlated across possible inputs. This can be addressed by having more input-dependent hidden state transitions, making rnn more expressive. In order to remedy the aforementioned problem, each possible input INLINEFORM0 can be given its own matrix INLINEFORM1 parameterizing the contribution of INLINEFORM2 to INLINEFORM3 . DISPLAYFORM0 This is known as a trnn BIBREF9 , because all the matrices can be stacked to form a rank 3 tensor, INLINEFORM0 . The input INLINEFORM1 selects the relevant slice of the tensor in the one-hot case and a weighted sum over all slices in the dense case. The resulting matrix then acts as the appropriate INLINEFORM2 . However, such an approach is impractical because of the high parameter count such a tensor would entail. The tensor can nonetheless be approximated by factorizing it BIBREF11 as follows: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are weight matrices, and INLINEFORM2 is the operator turning a vector INLINEFORM3 into a diagonal matrix where the elements of INLINEFORM4 form the main diagonal of said matrix. Replacing INLINEFORM5 in Equation ( EQREF2 ) by this tensor factorization, we obtain DISPLAYFORM0 where INLINEFORM0 is known as the intermediate state, given by DISPLAYFORM0 Here, INLINEFORM0 refers to the Hadamard or element-wise product of vectors. The intermediate state is the result of having the input apply a learned filter via the new parameter kernel INLINEFORM1 to the factors of the hidden state. It should be noted that the dimensionality of INLINEFORM2 is free and, should it become sufficiently large, the factorization becomes as expressive as the tensor. The ensuing model is known as a mrnn BIBREF9 . Sharing intermediate states While mrnn outperform simple rnn in character-level language modeling, they have been found wanting with respect to the popular lstm BIBREF5 . This prompted BIBREF8 to apply the multiplicative "trick" to lstm resulting in the mlstm, which achieved promising results in several language modeling tasks BIBREF8 . mLSTM Gated rnn, such as lstm and gru, use gates to help signals move through the network. The value of these gates is computed in much the same way as the candidate hidden state, albeit with different parameters. 
For example, lstm uses two different gates, INLINEFORM0 and INLINEFORM1 in updating its memory cell, INLINEFORM2 , DISPLAYFORM0 It uses another gate, INLINEFORM0 , in mapping INLINEFORM1 to the new hidden state, INLINEFORM2 , DISPLAYFORM0 where INLINEFORM0 is the sigmoid function, squashing its input between 0 and 1. INLINEFORM1 and INLINEFORM2 are known as forget and input gates, respectively. The forget gates allows the network to ignore components of the value of the memory cell at the past state. The input gate filters out certain components of the new hidden state. Finally, the output gates separates the memory cell from the actual hidden state. The values of these gates are computed at each time step as follows: DISPLAYFORM0 DISPLAYFORM1 Each gate has its own set of parameters to infer. If we were to replace each INLINEFORM0 by a tensor factorization as in mrnn, we would obtain a mlstm model. However, in the original formulation of mlstm, there is no factorization of each would-be INLINEFORM1 individually. There is no separate intermediate state for each gate, as one would expect. Instead, a single intermediate state, INLINEFORM2 , is computed to replace INLINEFORM3 in all equations in the system, by Eq. EQREF5 . Furthermore, each gate has its own INLINEFORM4 weighting INLINEFORM5 . Their values are computed as follows: DISPLAYFORM0 DISPLAYFORM1 The model can therefore no longer be understood as as an approximation of the trnn. Nonetheless, it has achieved empirical success in nlp. We therefore try to explore the empirical merits of this shared parametrization and apply them to other rnn architectures. True mLSTM We have presented the original mlstm model with its shared intermediate state. If we wish to remain true to the original multiplicative model, however, we have to factorize every would-be INLINEFORM0 tensor separately. We have: DISPLAYFORM0 DISPLAYFORM1 with each INLINEFORM0 being given by a separate set of parameters: DISPLAYFORM0 We henceforth refer to this model as tmlstm. We sought to apply the same modifications to the gru model, as lstm and gru are known to perform similarly BIBREF12 , BIBREF13 , BIBREF14 . That is, we build a tmgru model, as well as a mgru with a shared intermediate state. GRU The gru was first proposed by BIBREF6 as a lighter, simpler variant of lstm. gru relies on two gates, called, respectively, the update and reset gates, and no additional memory cell. These gates intervene in the computation of the hidden state as follows: DISPLAYFORM0 where the candidate hidden state, INLINEFORM0 , is given by: DISPLAYFORM0 The update gate deletes specific components of the hidden state and replaces them with those of the candidate hidden state, thus updating its content. On the other hand, the reset gate allows the unit to start anew, as if it were reading the first symbol of the input sequence. They are computed much in the same way as the gates of lstm: DISPLAYFORM0 DISPLAYFORM0 True mGRU We can now make gru multiplicative by using the tensor factorization for INLINEFORM0 and INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM1 with each INLINEFORM0 given by Eq. EQREF19 . There is a subtlety to computing INLINEFORM1 , as we need to apply the reset gate to INLINEFORM2 . While INLINEFORM3 itself is given by Eq. EQREF4 , INLINEFORM4 is not computed the same way as in mlstm and mrnn. Instead, it is given by: DISPLAYFORM0 mGRU with shared intermediate state Sharing an intermediate state is not as immediate for gru. 
This is due to the application of INLINEFORM0 , which we need in computing the intermediate state that we want to share. That is, INLINEFORM1 and INLINEFORM2 would both depend on each other. We modify the role of INLINEFORM3 to act as a filter on INLINEFORM4 , rather than a reset on individual components of INLINEFORM5 . Note that, when all components of INLINEFORM6 go to zero, it amounts to having all components of INLINEFORM7 at zero. We have DISPLAYFORM0 and DISPLAYFORM0 INLINEFORM0 is given by DISPLAYFORM0 with INLINEFORM0 the same as in mrnn and mlstm this time, i.e. Eq. EQREF5 . The final hidden state is computed the same way as in the original gru (Eq. EQREF21 ). Experiments in character-level language modeling Character-level language modeling (or character prediction) consists in predicting the next character while reading a document one character at a time. It is a common benchmark for rnn because of the heightened need for shared parametrization when compared to word-level models. We test mgru on two well-known datasets, the Penn Treebank and Text8. Penn Treebank The Penn Treebank dataset BIBREF15 comes from a series of Wall Street Journal articles written in English. Following BIBREF16 , sections 0-20 were used for training, 21-22 for validation and 23-24 for testing, respectively, which amounts to 5.1M, 400K and 450K characters, respectively. The vocabulary consists of 10K lowercase words. All punctuation is removed and numbers were substituted for a single capital N. All words out of vocabulary are replaced by the token <unk>. The training sequences were passed to the model in batches of 32 sequences. Following BIBREF8 , we built an initial mlstm model of 700 units. However, we set the dimensionality of the intermediate state to that of the input in order to keep the model small. We do the same for our mgru, tmlstm and tmgru, changing only the size of the hidden state so that all four models have roughly the same parameter count. We trained it using the Adam optimizer BIBREF17 , selecting the best model on validation over 10 epochs. We apply no regularization other than a checkpoint which keeps the best model over all epochs. The performance of the model is evaluated using cross entropy in bpc, which is INLINEFORM0 of perplexity. All models outperform previously reported results for mlstm BIBREF8 despite lower parameter counts. This is likely due to our relatively small batch size. However, they perform fairly similarly. Encouraged by these results, we built an mgru with both hidden and intermediate state sizes set to that of the original mlstm (700). This version highly surpasses the previous state of the art while still having fewer parameters than previous work. For the sake of comparison, results as well as parameter counts (where available) of our models (bold) and related approaches are presented in Table TABREF34 . mgru and larger mgru, our best models, achieved respectively an error of 1.07 and 0.98 bpc on the test data, setting a new state of the art for this task. Text8 The Text8 corpus BIBREF21 comprises the first 100M plain text characters in English from Wikipedia in 2006. As such, the alphabet consists of the 26 letters of the English alphabet as well as the space character. No vocabulary restrictions were put in place. As per BIBREF16 , the first 90M and 5M characters were used for training and validation, respectively, with the last 5M used for testing. Encouraged by our results on the Penn Treebank dataset, we opted to use similar configurations. 
However, as the data is one long sequence of characters, we divide it into sequences of 200 characters. We pass these sequences to the model in slightly larger batches of 50 to speed up computation. Again, the dimensionality of the hidden state for mlstm is set at 450 after the original model, and that of the intermediate state is set to the size of the alphabet. The size of the hidden state is adjusted for the other three models as it was for the PTB experiments. The model is also trained using the Adam optimizer over 10 epochs. The best model as per validation data over 10 epochs achieves 1.40 bpc on the test data, slightly surpassing an mlstm of smaller hidden-state dimensionality (450) but larger parameter count. Our results are more modest, as are those of the original mlstm. Once again, results do not vary greatly between models. As with the Penn Treebank, we proceed with building an mgru with both hidden and intermediate state sizes set to 450. This improves performance to 1.21 bpc, setting a new state of the art for this task and surpassing a large mlstm of 1900 units from BIBREF8 despite having far fewer parameters (45M to 5M). For the sake of comparison, results as well as parameter counts of our models and related approaches are presented in Table TABREF36 . It should be noted that some of these models employ dynamic evaluation BIBREF7 , which fits the model further during evaluation. We refer the reader to BIBREF22 . These models are indicated by a star. Conclusion We have found that competitive results can be achieved with mrnn using small models. We have not found significant differences in the approaches presented, despite added non-intuitive parameter-sharing constraints when controlling for model size. Our results are restricted to character-level language modeling. Along this line of thought, previous work on mrnn demonstrated their increased potential when compared to their regular variants BIBREF9 , BIBREF8 , BIBREF23 . We therefore offer other variants as well as a first investigation into their differences. We hope to have evinced the impact of increased flexibility in hidden-state transitions on rnn sequence-modeling capabilities. Further work in this area is required to transpose these findings into applied tasks in nlp.
Penn Treebank, Text8
02417455c05f09d89c2658f39705ac1df1daa0cd
02417455c05f09d89c2658f39705ac1df1daa0cd_0
Q: How much does it minimally cost to fine-tune some model according to benchmarking framework? Text: Introduction Environmental concerns of machine learning research has been rising as the carbon emission of certain tasks like neural architecture search reached an exceptional “ocean boiling” level BIBREF7. Increased carbon emission has been one of the key factors to aggravate global warming . Research and development process like parameter search further increase the environment impact. When using cloud-based machines, the environment impact is strongly correlated with budget. The recent emergence of leaderboards such as SQuAD BIBREF8, GLUE BIBREF0 and SuperGLUE BIBREF9 has greatly boosted the development of advanced models in the NLP community. Pretrained models have proven to be the key ingredient for achieving state of the art in conventional metrics. However, such models can be extremely expensive to train. For example, XLNet-Large BIBREF2 was trained on 512 TPU v3 chips for 500K steps, which costs around 61,440 dollars, let alone staggeringly large carbon emission. Moreover, despite impressive performance gain, the fine-tuning and inference efficiency of NLP models remain under-explored. As recently mentioned in a tweet, the popular AI text adventure game AI Dungeon has reached 100 million inferences. The energy efficiency of inference cost could be critical to both business planning and environment impact. Previous work BIBREF10, BIBREF11 on this topic proposed new metrics like FPO (floating point operations) and new practice to report experimental results based on computing budget. Other benchmarks like BIBREF12 and BIBREF13 compares the efficiency of models on the classic reading comprehension task SQuAD and machine translation tasks. However, there has not been a concrete or practical reference for accurate estimation on NLP model pretraining, fine-tunning and inference considering multi-task energy efficiency. Energy efficiency can be reflected in many metrics including carbon emission, electricity usage, time consumption, number of parameters and FPO as shown in BIBREF10. Carbon emission and electricity are intuitive measures yet either hard to track or hardware-dependent. Number of parameteres does not reflect the acutal cost for model training and inference. FPO is steady for models but cannot be directly used for cost estimation. Here in order to provide a practical reference for model selection for real applications, especially model development outside of academia, we keep track of the time consumption and acutal budget for comparison. Cloud based machines are employed for cost estimation as they are easily accessible and consistent in hardware configuration and performance. In the following sections, we would use time and cost to denote the time elapsed and the acutal budget in model pretraining / training / inference. In most NLP pretrained model setting, there are three phases: pretraining, fine-tuning and inference. If a model is trained from scratch, we consider such model has no pretraining phase but fine-tuned from scratch. Typically pretraining takes several days and hundreds of dollars, according to Table TABREF1. Fine-tuning takes a few minutes to hours, costing a lot less than pretraining phase. Inference takes several milli-seconds to seconds, costing much less than fine-tuning phase. Meanwhile, pretraining is done before fine-tuning once for all, while fine-tuning could be performed multiple times as training data updates. 
Inference is expected to be called numerous times for downstream applications. Such characteristics make it an intuitive choice to separate different phases during benchmarking. Our Hulk benchmark, as shown in Figure FIGREF5, utilizes several classic datasets that have been widely adopted in the community as benchmarking tasks to benchmark energy efficiency and compares pretrained models in a multi-task fashion. The tasks include natural language inference task MNLI BIBREF14, sentiment analysis task SST-2 BIBREF15 and Named Entity Recognition Task CoNLL-2003 BIBREF16. Such tasks are selected to provide a thourough comparison of end-to-end energy efficiency in pretraining, fine-tuning and inference. With the Hulk benchmark, we quantify the energy efficiency of model pretraining, fine-tuning and inference phase by comparing the time and cost they require to reach certain overall task-specific performance level on selected datasets. The design principle and benchmarking process are detailed in section SECREF2. We also explore the relation between model parameter and fine-tuning efficiency and demonstrate consistency of energy efficiency between tasks for different pretrained models. Benchmark Overview For pretraining phase, the benchmark is designed to favor energy efficient models in terms of time and cost that each model takes to reach certain multi-task performance pretrained from scratch. For example, we keep track of the time and cost of a BERT model pretrained from scratch. After every thousand of pretraining steps, we clone the model for fine-tuning and see if the final performance can reach our cut-off level. When the level is reached, time and cost for pretraining is used for comparison. Models faster or cheaper to pretrain are recommended. For fine-tuning phase, we consider the time and cost each model requires to reach certain multi-task performance fine-tuned from given pretrained models because for each single task with different difficulty and instance number, the fine-tuning characteristics may differ a lot. When pretrained models are used to deal with non-standard downstream task, especially ad hoc application in industry, the training set's difficulty cannot be accurately estimated. Therefore, it's important to compare the multi-task efficiency for model choice. For inference phase, the time and cost of each model making inference for single instance on multiple tasks are considered in the similar fashion as the fine-tuning phase. Benchmark Overview ::: Dataset Overview The datasets we used are widely adopted in NLP community. Quantitative details of datasets can be found in Table TABREF7. The selected tasks are shown below: leftmargin=15pt,labelindent=15pt [enumerate]wide=0pt, leftmargin=15pt, labelwidth=15pt, align=left CoNLL 2003 The Conference on Computational Natural Language Learning (CoNLL-2003) shared task concerns language-independent named entity recognition BIBREF16. The task concentrates on four types of named entities: persons, locations, organizations and other miscellaneous entities. Here we only use the English dataset. The English data is a collection of news wire articles from the Reuters Corpus. Result is reflected as F1 score considering the label accuracy and recall on dev set. MNLI The Multi-Genre Natural Language Inference Corpus BIBREF14 is a crowdsourced collection of sentence pairs with textual entailment annotations. 
Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The accuracy score is reported as the average of performance on matched and mismatched dev sets. SST-2 The Stanford Sentiment Treebank BIBREF15 consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. Following the setting of GLUE, we also use the two-way (positive/negative) class split, and use only sentence-level labels. The tasks are selected based on how representitve the dataset is. CoNLL 2003 has been a widely used dataset for named entity recognition and acutally requires output of token level labeling. NER is a core NLP task and CoNLL 2003 has been a classic dataset in this area. SST-2 and MNLI are part of the GLUE benchmark, representing sentence level labeling tasks. SST-2 has been frequently used in sentiment analysis across different generations of models. MNLI is a newly introduced large dataset for natural language inference. The training time for MNLI is relatively long and the task requires a lot more training instances. We select the three tasks for a diverse yet practical benchmark for pretrained models without constrain the models to sentence level classification tasks. In addition, their efficiency differ significantly in the fine-tuning and inference phase. Such difference can still be reflected on the final score after normalization as shown in Table TABREF8. Provided with more computing resource , we can bring in more datasets for even more thorough benchmarking in the furture. We illustrate the evaluation criteria in the following subsection. Benchmark Overview ::: Evaluation Criteria In machine learning model training and inference, slight parameter change can have subtle impact on the final result. In order to make a practical reference for pretrained model selection, we compare models' end-to-end performance with respect to the pretraining time, pretraining cost, training time, training cost, inference time, infernce latency and cost following the setting of BIBREF12. For pretraining phase, we design the process to explore how much computing resource is required to reach certain multi-task performance by fine-tuning after the pretraining. Therefore, during model pretraining, after a number of steps, we use the half-pretrained model for fine-tuning and see if the fine-tuned model can reach our cut-off performance. When it does, we count the time and cost in the pretraining process for benchmarking and analysis. For fine-tuning phase, we want to compare the general efficiency of pretrained model reaching cut-off performance on selected dataset. During fine-tuning, we evaluate the half-fine-tuned model on development set after a certain number of steps. When the performance reach our cut-off performance, we count the time and cost in this fine-tuning process for benchmarking and analysis. To be specific, for a single pretrained model, the efficiency score on different tasks is defined as the sum of normalized time and cost. Here we normalize the time and cost because they vary dramatically between tasks. 
In order to simplify the process, we compute the ratio of BERTLARGE's time and cost to that of each model as the normalized measure as shown in Table TABREF8 and Table TABREF9. For inference phase, we follow the principles in fune-tuning except we use the time and cost of inference for benchmarking. Benchmark Overview ::: Performance Cut-off Selection The selection of performance cutoff could be very critical because we consider certrain models being qualified after reaching certrain performance on development set. Meanwhile, certrain tasks can reach a “sweet point” where after relatively smaller amount of training time, the model reaches performance close to the final results despite negelagible difference. We select the cut-off performance threshold by obersvering the recent state-of-the-art performance on selected tasks. Benchmark Overview ::: Submission to Benchmark Submissions can be made to our benchmark through sending code and results to our Hulk benchmark CodaLab competition following the guidelines in both our FAQ part of website and competition introduction. We require the submissions to include detailed end-to-end model training information including model run time, cost(cloud based machine only), parameter number and part of the development set output for result validation. A training / fine-tuning log including time consumption and dev set performance after certain steps is also required. For inference, development set output, time consumption and hardware / software details should be provided. In order for model reproducity, source code is required. Baseline Settings and Analysis For computation-heavy tasks, we adopt the reported resource requirements in the original papers as the pretraining phase baselines. For fine-tuning and inference phase, we conduct extensive experiments on given hardware (GTX 2080Ti GPU) with different model settings as shown in Table TABREF8 and Table TABREF9. We also collect the devlopment set performance with time in fine-tuning to investigate in how the model are fine-tuned for different tasks. In our fine-tuning setting, we are given a specific hardware and software configuration, we adjust the hyper-parameter to minimize the time required for fine-tuning towards cut-off performance. For example, we choose proper batchsize and learning rate for BERTBASE to make sure the model converges and can reach expected performance as soon as possible with parameter searching. As shown in Figure FIGREF15, the fine-tuning performance curve differs a lot among pretrained models. The x-axis denoting time consumed is shown in log-scale for better comparison of different models. None of the models acutally take the lead in all tasks. However, if two pretrained models are in the same family, such as BERTBASE and BERTLARGE, the model with smaller number of parameters tend to converge a bit faster than the other in the NER and SST-2 task. In the MNLI task, such trend does not apply possibly due to increased diffculty level and training instance number which favor larger model capacity. Even though ALBERT model has a lot less parameters than BERT, according to Table TABREF1, the fine-tuning time of ALBERT model is significantly more than BERT models. This is probably because ALBERT uses large hidden size and more expensive matrix computation. The parameter sharing technique actually makes it harder to fine-tune the model. RoBERTaLARGE model relatively stable in all tasks. 
Related Work GLUE benchmark BIBREF0 is a popular multi-task benchmarking and diagnosis platform providing score evaluating multi-task NLP models considering multiple single task performance. SuperGLUE BIBREF9 further develops the task and enriches the dataset used in evaluation, making the task more challenging. These multi-task benchmarks does not take computation efficiency into consideration but still innovates the development of pretrained models. MLPerf BIBREF13 compares training and inference efficiency from hardware perspective, providing helpful resources on hardware selection and model training. Their benchmark is limited to focusing on several typical applications including image classification and machine translation. Previous work BIBREF10, BIBREF11 on related topic working towards “Green AI” proposes new metrics like FPO and new principle in efficiency evaluation. We further make more detailed and practical contributions towards model energy efficiency benchmarking. Other work like DAWNBenchmark BIBREF12 looks into the area of end-to-end model efficiency comparison for both computer vision and NLP task SQuAD. The benchmark does not compare multi-task efficiency performance and covered only one NLP task. The Efficient NMT shared task of The 2nd Workshop on Neural Machine Translation and Generation proposed efficiency track to compare neural machine translation models' inference time. Our platform covers more phases and support multi-task comparison. Conclusion We developed the Hulk platform focusing on the energy efficiency evaluation of NLP models based on their end-to-end performance on selected NLP tasks. The Hulk platform compares models in pretraining, fine-tuning and inference phase, making it clear to follow and propose more training and inference efficient models. We have compared the fine-tuning efficiency of given models during baseline testing and demonstrated more parameters lead to slower fine-tuning when using same model but does not hold when model changes.We expect more submissions in the future to flourish and enrich our benchmark. Acknowledgments This work is supported by the Institute of Energy Efficiency (IEE) at UCSB's seed grant in Summer 2019 to improve the energy efficiency of AI and machine learning..
$1,728
6ce057d3b88addf97a30cb188795806239491154
6ce057d3b88addf97a30cb188795806239491154_0
Q: What models are included in baseline benchmarking results? Text: Introduction Environmental concerns of machine learning research has been rising as the carbon emission of certain tasks like neural architecture search reached an exceptional “ocean boiling” level BIBREF7. Increased carbon emission has been one of the key factors to aggravate global warming . Research and development process like parameter search further increase the environment impact. When using cloud-based machines, the environment impact is strongly correlated with budget. The recent emergence of leaderboards such as SQuAD BIBREF8, GLUE BIBREF0 and SuperGLUE BIBREF9 has greatly boosted the development of advanced models in the NLP community. Pretrained models have proven to be the key ingredient for achieving state of the art in conventional metrics. However, such models can be extremely expensive to train. For example, XLNet-Large BIBREF2 was trained on 512 TPU v3 chips for 500K steps, which costs around 61,440 dollars, let alone staggeringly large carbon emission. Moreover, despite impressive performance gain, the fine-tuning and inference efficiency of NLP models remain under-explored. As recently mentioned in a tweet, the popular AI text adventure game AI Dungeon has reached 100 million inferences. The energy efficiency of inference cost could be critical to both business planning and environment impact. Previous work BIBREF10, BIBREF11 on this topic proposed new metrics like FPO (floating point operations) and new practice to report experimental results based on computing budget. Other benchmarks like BIBREF12 and BIBREF13 compares the efficiency of models on the classic reading comprehension task SQuAD and machine translation tasks. However, there has not been a concrete or practical reference for accurate estimation on NLP model pretraining, fine-tunning and inference considering multi-task energy efficiency. Energy efficiency can be reflected in many metrics including carbon emission, electricity usage, time consumption, number of parameters and FPO as shown in BIBREF10. Carbon emission and electricity are intuitive measures yet either hard to track or hardware-dependent. Number of parameteres does not reflect the acutal cost for model training and inference. FPO is steady for models but cannot be directly used for cost estimation. Here in order to provide a practical reference for model selection for real applications, especially model development outside of academia, we keep track of the time consumption and acutal budget for comparison. Cloud based machines are employed for cost estimation as they are easily accessible and consistent in hardware configuration and performance. In the following sections, we would use time and cost to denote the time elapsed and the acutal budget in model pretraining / training / inference. In most NLP pretrained model setting, there are three phases: pretraining, fine-tuning and inference. If a model is trained from scratch, we consider such model has no pretraining phase but fine-tuned from scratch. Typically pretraining takes several days and hundreds of dollars, according to Table TABREF1. Fine-tuning takes a few minutes to hours, costing a lot less than pretraining phase. Inference takes several milli-seconds to seconds, costing much less than fine-tuning phase. Meanwhile, pretraining is done before fine-tuning once for all, while fine-tuning could be performed multiple times as training data updates. 
Inference is expected to be called numerous times for downstream applications. Such characteristics make it an intuitive choice to separate the different phases during benchmarking. Our Hulk benchmark, as shown in Figure FIGREF5, utilizes several classic datasets that have been widely adopted in the community as benchmarking tasks to benchmark energy efficiency and compare pretrained models in a multi-task fashion. The tasks include the natural language inference task MNLI BIBREF14, the sentiment analysis task SST-2 BIBREF15 and the named entity recognition task CoNLL-2003 BIBREF16. These tasks are selected to provide a thorough comparison of end-to-end energy efficiency in pretraining, fine-tuning and inference. With the Hulk benchmark, we quantify the energy efficiency of the model pretraining, fine-tuning and inference phases by comparing the time and cost they require to reach a certain overall task-specific performance level on the selected datasets. The design principle and benchmarking process are detailed in section SECREF2. We also explore the relation between model parameters and fine-tuning efficiency and demonstrate the consistency of energy efficiency between tasks for different pretrained models. Benchmark Overview For the pretraining phase, the benchmark is designed to favor energy-efficient models in terms of the time and cost each model takes to reach a certain multi-task performance when pretrained from scratch. For example, we keep track of the time and cost of a BERT model pretrained from scratch. After every thousand pretraining steps, we clone the model for fine-tuning and see if the final performance can reach our cut-off level. When the level is reached, the time and cost for pretraining are used for comparison. Models that are faster or cheaper to pretrain are recommended. For the fine-tuning phase, we consider the time and cost each model requires to reach a certain multi-task performance when fine-tuned from a given pretrained model, because fine-tuning characteristics may differ a lot across tasks with different difficulty and numbers of instances. When pretrained models are used to deal with non-standard downstream tasks, especially ad hoc applications in industry, the training set's difficulty cannot be accurately estimated. Therefore, it is important to compare multi-task efficiency for model choice. For the inference phase, the time and cost of each model making an inference for a single instance on multiple tasks are considered in a similar fashion to the fine-tuning phase. Benchmark Overview ::: Dataset Overview The datasets we use are widely adopted in the NLP community. Quantitative details of the datasets can be found in Table TABREF7. The selected tasks are described below. CoNLL 2003 The Conference on Computational Natural Language Learning (CoNLL-2003) shared task concerns language-independent named entity recognition BIBREF16. The task concentrates on four types of named entities: persons, locations, organizations and other miscellaneous entities. Here we only use the English dataset. The English data is a collection of newswire articles from the Reuters Corpus. Results are reported as the F1 score, which considers label precision and recall, on the dev set. MNLI The Multi-Genre Natural Language Inference Corpus BIBREF14 is a crowdsourced collection of sentence pairs with textual entailment annotations.
Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The accuracy score is reported as the average of performance on the matched and mismatched dev sets. SST-2 The Stanford Sentiment Treebank BIBREF15 consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. Following the setting of GLUE, we also use the two-way (positive/negative) class split, and use only sentence-level labels. The tasks are selected based on how representative each dataset is. CoNLL 2003 has been a widely used dataset for named entity recognition and actually requires token-level labeling output. NER is a core NLP task and CoNLL 2003 has been a classic dataset in this area. SST-2 and MNLI are part of the GLUE benchmark, representing sentence-level labeling tasks. SST-2 has been frequently used in sentiment analysis across different generations of models. MNLI is a newly introduced large dataset for natural language inference. The training time for MNLI is relatively long and the task requires a lot more training instances. We select these three tasks for a diverse yet practical benchmark of pretrained models without constraining the models to sentence-level classification tasks. In addition, their efficiency differs significantly in the fine-tuning and inference phases. Such differences are still reflected in the final score after normalization, as shown in Table TABREF8. Provided with more computing resources, we can bring in more datasets for even more thorough benchmarking in the future. We illustrate the evaluation criteria in the following subsection. Benchmark Overview ::: Evaluation Criteria In machine learning model training and inference, slight parameter changes can have a subtle impact on the final result. In order to provide a practical reference for pretrained model selection, we compare models' end-to-end performance with respect to pretraining time, pretraining cost, training time, training cost, inference time, inference latency and cost, following the setting of BIBREF12. For the pretraining phase, we design the process to explore how much computing resource is required to reach a certain multi-task performance by fine-tuning after pretraining. Therefore, during model pretraining, after a number of steps, we use the half-pretrained model for fine-tuning and see if the fine-tuned model can reach our cut-off performance. When it does, we count the time and cost of the pretraining process for benchmarking and analysis. For the fine-tuning phase, we want to compare the general efficiency of a pretrained model in reaching cut-off performance on the selected datasets. During fine-tuning, we evaluate the half-fine-tuned model on the development set after a certain number of steps. When the performance reaches our cut-off level, we count the time and cost of this fine-tuning process for benchmarking and analysis. To be specific, for a single pretrained model, the efficiency score on different tasks is defined as the sum of normalized time and cost. Here we normalize the time and cost because they vary dramatically between tasks.
In order to simplify the process, we compute the ratio of BERTLARGE's time and cost to that of each model as the normalized measure, as shown in Table TABREF8 and Table TABREF9. For the inference phase, we follow the same principles as in fine-tuning, except that we use the time and cost of inference for benchmarking. Benchmark Overview ::: Performance Cut-off Selection The selection of the performance cutoff is critical because we consider a model qualified once it reaches a certain performance on the development set. Meanwhile, certain tasks can reach a “sweet spot” where, after a relatively small amount of training time, the model reaches performance close to the final result, with only a negligible difference. We select the cut-off performance threshold by observing the recent state-of-the-art performance on the selected tasks. Benchmark Overview ::: Submission to Benchmark Submissions can be made to our benchmark by sending code and results to our Hulk benchmark CodaLab competition, following the guidelines in both the FAQ section of our website and the competition introduction. We require submissions to include detailed end-to-end model training information, including model run time, cost (cloud-based machines only), the number of parameters, and part of the development set output for result validation. A training / fine-tuning log including time consumption and dev set performance after a certain number of steps is also required. For inference, the development set output, time consumption and hardware / software details should be provided. To ensure model reproducibility, source code is required. Baseline Settings and Analysis For computation-heavy tasks, we adopt the resource requirements reported in the original papers as the pretraining phase baselines. For the fine-tuning and inference phases, we conduct extensive experiments on given hardware (a GTX 2080Ti GPU) with different model settings, as shown in Table TABREF8 and Table TABREF9. We also collect the development set performance over time during fine-tuning to investigate how the models are fine-tuned for different tasks. In our fine-tuning setting, given a specific hardware and software configuration, we adjust the hyper-parameters to minimize the time required for fine-tuning to reach cut-off performance. For example, we choose a proper batch size and learning rate for BERTBASE to make sure the model converges and can reach the expected performance as soon as possible with parameter searching. As shown in Figure FIGREF15, the fine-tuning performance curve differs a lot among pretrained models. The x-axis denoting time consumed is shown in log scale for better comparison of different models. None of the models actually takes the lead in all tasks. However, if two pretrained models are in the same family, such as BERTBASE and BERTLARGE, the model with the smaller number of parameters tends to converge a bit faster than the other on the NER and SST-2 tasks. On the MNLI task, this trend does not apply, possibly due to the increased difficulty level and number of training instances, which favor larger model capacity. Even though the ALBERT model has far fewer parameters than BERT, according to Table TABREF1, the fine-tuning time of the ALBERT model is significantly longer than that of the BERT models. This is probably because ALBERT uses a large hidden size and more expensive matrix computations. The parameter sharing technique actually makes the model harder to fine-tune. The RoBERTaLARGE model is relatively stable across all tasks.
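As a concrete illustration of the normalized efficiency score described in the Evaluation Criteria subsection above, the following is a minimal Python sketch; the function name, the dictionary layout, and the example numbers are our own assumptions for illustration and are not part of the released Hulk implementation, which may compute the score differently.

# Minimal sketch: per-model efficiency score as the sum, over tasks, of the
# reference model's (BERT-Large in the paper) time and cost ratios.
def efficiency_score(model_measurements, reference_measurements):
    """Each dict maps a task name to (time_seconds, cost_dollars), recorded
    when the model reaches that task's cut-off performance. Larger scores
    mean the model is faster / cheaper relative to the reference."""
    score = 0.0
    for task, (ref_time, ref_cost) in reference_measurements.items():
        model_time, model_cost = model_measurements[task]
        score += ref_time / model_time + ref_cost / model_cost
    return score

# Hypothetical usage with made-up measurements:
reference = {"MNLI": (10000, 4.0), "SST-2": (1200, 0.5), "CoNLL-2003": (900, 0.4)}
candidate = {"MNLI": (8000, 3.2), "SST-2": (1500, 0.6), "CoNLL-2003": (700, 0.3)}
print(efficiency_score(candidate, reference))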
Related Work The GLUE benchmark BIBREF0 is a popular multi-task benchmarking and diagnosis platform that provides a score evaluating multi-task NLP models based on multiple single-task performances. SuperGLUE BIBREF9 further develops the tasks and enriches the datasets used in evaluation, making the benchmark more challenging. These multi-task benchmarks do not take computational efficiency into consideration but have nonetheless spurred the development of pretrained models. MLPerf BIBREF13 compares training and inference efficiency from a hardware perspective, providing helpful resources on hardware selection and model training. Their benchmark is limited to several typical applications, including image classification and machine translation. Previous work BIBREF10, BIBREF11 on the related topic of “Green AI” proposes new metrics like FPO and new principles for efficiency evaluation. We further make more detailed and practical contributions towards model energy efficiency benchmarking. Other work like DAWNBench BIBREF12 looks into end-to-end model efficiency comparison for both computer vision and the NLP task SQuAD. That benchmark does not compare multi-task efficiency and covers only one NLP task. The Efficient NMT shared task of the 2nd Workshop on Neural Machine Translation and Generation proposed an efficiency track to compare neural machine translation models' inference time. Our platform covers more phases and supports multi-task comparison. Conclusion We developed the Hulk platform, which focuses on the energy efficiency evaluation of NLP models based on their end-to-end performance on selected NLP tasks. The Hulk platform compares models in the pretraining, fine-tuning and inference phases, making it easier to follow up and propose models that are more efficient to train and run inference with. We have compared the fine-tuning efficiency of the given models during baseline testing and demonstrated that more parameters lead to slower fine-tuning within the same model family, but that this does not hold across model families. We expect more submissions in the future to flourish and enrich our benchmark. Acknowledgments This work is supported by a Summer 2019 seed grant from the Institute of Energy Efficiency (IEE) at UCSB to improve the energy efficiency of AI and machine learning.
BERT, XLNet, RoBERTa, ALBERT, DistilBERT
4cab33c8dd46002e0ccafda3916b37366a24a394
4cab33c8dd46002e0ccafda3916b37366a24a394_0
Q: did they compare with other evaluation metrics? Text: Introduction Smart conversational agents are increasingly used across business domains BIBREF0 . We focus on recruitment chatbots that connect recruiters and job-seekers. The recruiter teams we work with are motivated by reasons of scale and accessibility to build and maintain chatbots that provide answers to frequently asked questions (FAQs) based on ML/NLP datasets. Our enterprise clients may have up to INLINEFORM0 employees, and commensurate hiring rate. We have found that almost INLINEFORM1 of end-user (job-seeker) traffic occurs outside of working hours BIBREF1 , which is consistent with the anecdotal reports of our clients that using the chatbot helped reduce email and ticket inquiries of common FAQs. The usefulness of these question-answering conversational UIs depends on building and maintaining the ML/NLP components used in the overall flow (see Fig. FIGREF4 ). In practice, the use of NLP does not improve the experience of many chatbots BIBREF2 , which is unsurprising. Although transparency (being “honest and transparent when explaining why something doesn't work”) is a core design recommendation BIBREF3 , the most commonly available higher-level platforms BIBREF4 do not provide robust ways to understand error and communicate its implications. Interpretability is a challenge beyond chatbots, and is a prerequisite for trust in both individual predictions and the overall model BIBREF5 . The development of the nex-cv metric was driven by a need for a quantification useful to developers, as well as both vendor and client non-developer staff. The nex-cv metric uses plausible negative examples to perform actionable, model-agnostic evaluation of text classification as a component in a chatbot system. It was developed, validated, and used at jobpal, a recruiting chatbot company, in projects where a client company's recruiting team trains and maintains a semi-automated conversational agent's question-answering dataset. Use of ML and NLP is subject to conversation flow design considerations, and internal and external transparency needs BIBREF6 . The chatbots do not generate answers, but provide all responses from a bank that can be managed by client staff. Each of about a dozen live chatbots answers about INLINEFORM0 of incoming questions without having to defer to a human for an answer. About two thirds of the automated guesses are confirmed by recruiters; the rest are corrected (Fig. FIGREF9 ). In “Background”, we relate our work to prior research on curated ML/NLP datasets and evaluation in chatbots. In “Approach”, we describe the metric and provide its application and data context of use. In “Validation Datasets”, we describe the datasets with which this metric has been validated. In “Validation”, we provide results from experiments conducted while developing and using the metric for over a year, addressing each of the needs of the metric, which make it a useful tool for multiple stakeholders in the chatbot design and maintenance process. We contribute a metric definition, its validation with six real projects over the course of one year (2018.Q2 through 2019.Q1), as well as an extensible implementation and testing plan, which is described in “Metric Definition” below. Background Chatbots, or “text messaging-based conversational agents”, have received particular attention in 2010s BIBREF0 . 
Many modern text-based chatbots use relatively simple NLP tools BIBREF7 , or avoid ML/NLP altogether BIBREF2 , relying on conversation flow design and non-NLP inputs like buttons and quick-replies. Conversational natural-language interfaces for question-answering have an extensive history, which distinguishes open-domain and closed-domain systems BIBREF8 . ML-based chatbots rely on curated data to provide examples for classes (commonly, “intents”), and must balance being widely-accessible to many end-users, but typically specialized in the domain and application goal BIBREF9 . In practice, design and development of a chatbot might assume a domain more focused, or different, than real use reveals. In the chatbot application context, the training dataset of a text classifier may be modified to improve that classifier's performance. The classes — “intents” — are trained with synthetic data and constitute anticipated, rather than actual, use. Existing general-purpose platforms include this synthetic data step as part of design and maintenance BIBREF4 . For example, when it comes to invocations for a voice agent BIBREF10 , dataset construction encodes findings about how users might imagine asking for some action: the authors use a crowd-sourcing mechanism to achieve both consistency useful for classification, and reflection of user expectations in the dataset. We adopt a similar approach: enabling domain-experts (recruiters) to maintain the dataset helps map end-user (job-seeker) needs to recruiters' goals. Data cleaning is not only relevant to chatbots. Model-agnostic systems for understanding machine learning can help iteratively develop machine learning models BIBREF11 . Developers tend to overlook data quality in favor of focusing on algorithmic improvements in building ML systems BIBREF12 . Feature engineering can be made accessible to non-developers or domain experts, e.g. BIBREF5 . We make use of representative examples in the process that surfaces nex-cv to non-developers; in describing this process in “Metric Application”, we map it to the inspection-explanation-refinement process employed in BIBREF11 . Enabling non-developers to perform data cleaning effectively allows developers to focus on model adjustments and feature engineering. There are many ways to measure overall chatbot quality, such as manual check-lists of high-level feature presence BIBREF13 , BIBREF2 . Static analysis and formal verification may be used with a specified flow BIBREF14 . User behavior measurements — both explicit, like ratings or feedback, and implicit, like timing or sentiment — are explored in BIBREF15 . During metric development, we used qualitative feedback from domain-expert users, and key performance indicators (KPIs), such as automatic response rate. Regardless of overall evaluation approach, the use of a classifier as a component in a complex flow demands robust and actionable evaluation of that component. Approach The nex-cv algorithm selects some classes as plausible sources of negative examples, and then uses those to partition the given dataset into training and test data (Alg. SECREF3 ). Negative examples are useful in chatbot component evaluation: the end-user interaction with a chatbot is open-ended, so the system is expected to encounter input that it should recognize as outside its domain. Low-membership classes are candidates for being ignored in training and used as negative examples in testing. 
Two mutually-exclusive variations use the INLINEFORM0 parameter for cutoff-based negative example selection (Alg. SECREF10 ); and the INLINEFORM1 parameter for proportional negative example selection (Alg. SECREF10 ). We focus on three settings, with INLINEFORM2 set to INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 . The values were tuned for typical distributions (see “Validation Datasets”), and the INLINEFORM6 is a validating measure that is comparable to 5-fold CV (see “Metric Definition”). We assume that low-population classes are all in the same domain as the rest. There may be exceptions: in some cases a new, small category may be created in response to new questions on an emergent topic well outside of the core domain. In our experience, this happens when there is a technical issue elsewhere on the site and the chatbot channel is used as an alternative to escalate the issue. In practice, our system handles this case well, even if the evaluation is less applicable. Such emergent categories either disappear over time: the topic is temporary; or grow: the topic becomes part of the domain. INLINEFORM0 Require data INLINEFORM1 s.t. INLINEFORM2 is the input text that has gold standard label INLINEFORM3 INLINEFORM4 Require label sets INLINEFORM5 s.t. INLINEFORM6 Require test fraction INLINEFORM7 and function INLINEFORM8 which randomly splits out two lists INLINEFORM9 s.t. INLINEFORM10 and INLINEFORM11 INLINEFORM0 INLINEFORM1 INLINEFORM2 s.t. INLINEFORM3 INLINEFORM4 INLINEFORM5 s.t. INLINEFORM6 INLINEFORM7 INLINEFORM8 s.t. INLINEFORM9 INLINEFORM10 s.t. INLINEFORM11 Negative Example Data Provision System Overview A chatbot (Fig. FIGREF7 ) is based on two datasets (Fig. FIGREF4 ), each maintained using a data management tool (Fig. FIGREF9 ). Traffic varies widely between projects, but is typically consistent within a project. To provide a range: in one quarter in 2018, the highest traffic chatbot had about 2000 active users, of which about 250 (ca. INLINEFORM0 ) asked questions. The lowest-traffic chatbot saw INLINEFORM1 weekly active users, of which 15 (ca. INLINEFORM2 ) asked questions. In both cases, a small number (2-4) of recruiters were responsible for maintaining the dataset. The training set of the FAQ portion of each project contains between INLINEFORM0 and INLINEFORM1 training examples across between 100 and 200 distinct classes, usually starting with about INLINEFORM2 classes and creating new classes after the system goes live and new, unanticipated user needs are encountered. To build classifiers on datasets of this size, we use spaCy BIBREF16 and fastText BIBREF17 for vectorization, with transformation for improved performance BIBREF18 , and logistic regression with L2 regularization BIBREF19 . The dataset for shared general intents is maintained through the data management tool by jobpal staff. One such classifier is shared by all companies that use a particular language; projects span English, German, Chinese, and French. About 20 general intents are trained with a total of about INLINEFORM0 to INLINEFORM1 training examples per language. These include intents that control the conversation (e.g., `stop', `help'). This shared language-specific classification step includes entity extraction of profession and city of interest to job-seekers; for example, statements like `I want a [profession] job in [city]` and `do you have any [profession] openings?' should all resolve to `job search' along with extracted keywords. 
Lastly, this classifier also identifies very common questions that affect all chatbots, but which are not in the recruitment domain: e.g., `how are you?' and `is this a robot?'. The dialog in Fig. FIGREF7 shows the FAQ functionality of the chatbots, powered by classification using company-specific FAQ datasets (see also Fig. FIGREF4 ). In most projects, users who ask question ask between 1 and 2 questions. The FAQ functionality is typically an addition to any existing information displays. Many of our chatbots also feature job discovery, including search and subscriptions. Job search may be triggered by clicking on the button [Look for a job], or writing something like “I would like a [profession] job in [location]” at almost any point in the flow. If either of location or profession is not specified, the user is prompted, and the responses are used to search current openings, which are then shown. The user may submit application or follow external links; the user may also ask questions about specific jobs or the employer more generally. Metric Definition The code available online provides the evaluation implementation, an abstract black-box definition for a classifier, and two strategies to help test an implementation. For integration testing, CustomClassifier.test() can be used to check consistency of classifier wrapper. For functional testing, nex-cv both INLINEFORM0 (Alg. SECREF10 ) and INLINEFORM1 (Alg. SECREF10 ) should yield comparable results to 5-fold cross-validation. INLINEFORM0 Require data INLINEFORM1 s.t. INLINEFORM2 is the input text that has gold standard label INLINEFORM3 INLINEFORM4 Require cutoff parameter INLINEFORM5 INLINEFORM0 in INLINEFORM1 , occurs INLINEFORM2 INLINEFORM3 in INLINEFORM4 , occurs INLINEFORM5 Cutoff Selection of Plausible Negative Example Classes In INLINEFORM0 -fold cross-validation, data is partitioned into INLINEFORM1 sets of INLINEFORM2 such that INLINEFORM3 (let the test fraction INLINEFORM4 ), and the training sets do not overlap. Then, each set of training data is evaluated using the corresponding test set. Evaluation can include many possible measures: accuracy or INLINEFORM5 ; representative examples; confusion matrix; timing data; etc. INLINEFORM0 Require data INLINEFORM1 s.t. INLINEFORM2 is the input text that has gold standard label INLINEFORM3 INLINEFORM4 Require proportion parameter INLINEFORM5 INLINEFORM0 Let INLINEFORM1 , as queue sorted from least to most occurring in INLINEFORM2 INLINEFORM3 INLINEFORM4 Pop element INLINEFORM5 from INLINEFORM6 INLINEFORM7 INLINEFORM8 in INLINEFORM9 , not in INLINEFORM10 Proportional selection of Plausible Negative Example Classes In nex-cv, test fraction INLINEFORM0 is a setting ( INLINEFORM1 for all reported experiments), and data partitions may overlap. As shown in Alg. SECREF3 , representation of high-population classes is enforced. Then, low-population classes are also split using INLINEFORM2 , and included either in the training set with their ground truth label; or in the test set as a negative example. In practice, this results in about INLINEFORM3 of the data being in training. Some low-population classes in the training set should be included as this is representative of the dataset shape; many low-population classes may affect the classification and confidence overall, depending on classification approach. Low-population classes are typically rare or relatively recent topics, so interpreting them as plausible negative examples helps to test the classifier, and its measure of confidence. 
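Because the algorithm listings above are difficult to read in this extracted form, here is a minimal Python sketch of the cutoff-based variant as we understand it from the surrounding description: classes with at most n members are treated as plausible sources of negative examples, every class is split with the test fraction, and held-out members of the low-population classes enter the test set under a synthetic out-of-domain label. All names (including the NEGATIVE label and the rounding choice) are our own assumptions; the authors' released implementation may differ in its details.

import random
from collections import Counter, defaultdict

def cutoff_negative_classes(labels, n):
    # Classes with at most n examples are plausible negative-example sources.
    counts = Counter(labels)
    return {label for label, count in counts.items() if count <= n}

def nex_cv_split(data, negative_classes, test_fraction=0.2, seed=0,
                 negative_label="NEGATIVE"):
    # data: list of (text, gold_label) pairs. Held-out members of
    # low-population classes become negative (out-of-domain) test examples;
    # their remaining members keep their gold label in the training set.
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for text, label in data:
        by_class[label].append(text)
    train, test = [], []
    for label, texts in by_class.items():
        rng.shuffle(texts)
        cut = max(1, int(len(texts) * test_fraction))  # rounding is an assumption
        held_out, kept = texts[:cut], texts[cut:]
        train.extend((text, label) for text in kept)
        test_label = negative_label if label in negative_classes else label
        test.extend((text, test_label) for text in held_out)
    return train, test

# Hypothetical usage, assuming `data` is a list of (text, label) pairs:
#   negatives = cutoff_negative_classes([label for _, label in data], n=5)
#   train, test = nex_cv_split(data, negatives)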
Validation Datasets The seven datasets to which we report having applied the nex-cv metric are in the recruitment domain. Each dataset has about INLINEFORM0 classes, and most have classes with 5-10 members as well as classes with over a hundred. To characterize the content, we trained a classifier on an anonymous benchmark dataset and used it to classify a random recent sample of INLINEFORM3 English-language questions. About INLINEFORM0 of recent end-user queries in English fall into 5 categories: (1) Application Process; (2) Salary; (3) Professional Growth and Development; (4) Internships; (5) Contact a Human. Another INLINEFORM0 of end-user queries fall into 14 categories: Application Evaluation; Application Deadline; Application Delete or Modify; How Long to Apply and Hear Back; Qualification; Application Documents; Language Expectations; Thesis; Working Hours; Location; Starting at the Company; Commute; Equipment; Benefits. About INLINEFORM0 of overall requests were not recognized (with a confidence of INLINEFORM1 or higher) as any of the categories in the anonymous benchmarking set. Upon manual inspection, some of these test questions were noise, and many were topics specific to particular company FAQs, such as concerning specific work-study programs; details of the application software; and other more niche topics. The classification datasets share some overlapping topics; each also has a specific set of additional topics. Each dataset has the typical shape of a few larger classes, and many smaller ones, which have an indirect relationship to what data is expected. The use of low-population classes as plausible negative examples takes advantage of both the content of the data (closed-domain, with a topic-specific core but a considerable number of additional, outlying topics) and the membership distribution of the classes (a few well-populated ones, and many much smaller classes). The nex-cv metric may apply in other problems or domains, but we developed and validated it in a series of experiments with six live datasets, in English and German (see Fig. FIGREF17 , of which chatbot E is also the subject of Fig. FIGREF14 ), in addition to the seventh aggregate anonymous benchmark dataset described above, which was used for the comparison in Fig. FIGREF18 . Validation The following case studies validate the metric relative to each of three requirements: (1) enable data quality improvements, as in Fig. FIGREF14 , (2) not be overly-optimistic, as in Fig. FIGREF17 , (3) enable model-agnostic comparison, as in Fig. FIGREF18 . Metric Application The goal of usefulness includes interpretability: “provid[ing] qualitative understanding between the input variables and the response... [taking] into account the user’s limitations in BIBREF5 . Usefulness combines this with actionable support of chatbot design. The users include, in this case, non-developer staff on both vendor and client side: recruiters and project managers. Through iteration on internal tools, we found that displaying performance information in the form of, “which 2-3 topics are the biggest problem?” was most effective for understanding, communication, and action. Over the course of a year, the nex-cv metric informed this analysis. During this time, both qualitative feedback and KPIs have validated that it was effective both for trust and for the end-user experience. The automation rate KPI — proportion of incoming queries that did not need deferral to a human, but answered immediately, as in Fig. 
FIGREF7 — has risen to and remained at INLINEFORM0 across projects, mainly due to data quality support during both design and maintenance. In one illustrative project (Fig. FIGREF14 ) the automation rate had become as low as INLINEFORM0 . The recruiters responsible for dealing with escalated questions became frustrated to see questions come up that had been asked before. Action needed to be taken, and this project became one of the first case studies for developing the application of nex-cv internally. After the intervention, the automated response rate rose into the desirable 70s range and remained there. The quality improvements were explained and implemented by an internal project manager, who proactively included client domain-expert users, over calls and emails, in explanations of what improvements were made and why. Initially, INLINEFORM1 classes were trained with INLINEFORM2 examples, with a long tail of low-population classes. Following the intervention, the dataset grew by INLINEFORM3 and, despite the risk of concept drift, did not deteriorate. To use nex-cv, we aggregate the confusion matrix from the INLINEFORM0 setting and rank how confused each pair of categories is. The most confused 2-3 pairs of classes are then the focus of conceptual, manual review in the dataset. Evaluation is performed again, producing a new ranking that guides the next 2-3 classes to focus on, until the metric falls below an acceptable threshold. There are other sources of classification error, but overlap between conceptually related pairs of classes accounts for most of the data quality problems we encounter in the datasets in practice, and is more readily understandable than other forms of error. This relatively simple approach is implemented as a Jupyter notebook accessible to non-developers (internal project managers). The details of the pairwise measures and the acceptability threshold were developed iteratively based on project manager feedback. The project managers also honed processes and intuitions for communicating this information to clients effectively. In extreme situations such as that shown in Fig. FIGREF14 , the project managers made a presentation to get buy-in and implemented data quality improvements on their own. However, the typical practice now is to provide explanations, in calls and emails, of the “confusions” between one or a few pairs of specific categories to the client. This practice builds awareness of data quality across stakeholders, and the domain-experts (recruiters) are better able to use the system to create the envisioned chatbot functionality without major intervention. As the number of projects grows, the metric can be used by project managers to monitor and prioritize data quality improvement tasks. Metric is not Overly Optimistic One of the practical motivations for a new metric was the sense that the existing metrics were too optimistic to be useful for improving chatbot behavior in response to overall qualitative feedback. As shown in Fig. FIGREF14 , for example, the typical INLINEFORM0 metric is more optimistic than nex-cv. As an initial step in validating the metric, we applied it in the case of six under-performing datasets that required some intervention. Fig. FIGREF14 shows the differences in data abundance and classifier quality across these six pseudonymized snapshots. Internal QA staff gave the human rating scores by considering whether a question-answer pair seemed reasonable: they could pick “yes”, “no”, or “can't tell”; in most cases, the appropriateness was not ambiguous. As shown in Fig.
FIGREF17 , the human-rater estimate of quality is consistently more lenient than any of the automated measures. Chatbot E in this case is the same project as shown in Fig. FIGREF14 , prior to improvements. Four of the six datasets analyzed had a very big difference between the human estimate of quality and the automated estimate, which, upon investigation, revealed that there were significant conceptual overlaps in the classes that the recruiters had trained, and in the answers given. So, indeed, the classifier was making surprisingly adequate guesses, but ones with very low confidence. Following the intervention described in the previous section, which includes ongoing communication of any outstanding problems by project managers to recruiter teams, this type of error became rare and quickly addressed. Metric can be used for Internal and External Comparison We used the nex-cv metric to help compare the performance of our classification component with two leading vendors for general-purpose chatbot development. Fig. FIGREF18 shows the comparison between jobpal and two leading vendors in the space. The three settings of the metric were aggregated to provide a plausible range of estimated performance. The range of accuracy was significantly higher for our domain-specific classifier than for those trained using general-purpose tools. Aside from being useful for classifying into known classes, the metric must account for fallback or escalation. This may be modeled as a separate class (as one of the external engines does with the “fallback” intent), or by relying on confidence scores from classifiers that produce measures of confidence (all engines provide some estimate of confidence that may be used). The “carefulness” score was included to represent how useful the confidence score is for deciding when to decline an answer: the number of incorrect guesses that were rejected due to too-low confidence scores divided by the total number of no-answer-given cases (no guess or low-confidence guess). Fig. FIGREF18 shows that the performance of our ML/NLP component on our domain-specific dataset is better than that of two popular general-purpose platforms, both in terms of classification accuracy and rate of deferral due to low-confidence answers. This comparison mechanism validates our system relative to existing external services in a way that is interpretable by various internal stakeholders, not only the developer staff. Conclusion We described and validated the nex-cv metric, which is a modification of cross-validation that makes use of plausible negative examples from low-population classes in the datasets typical of our application area and domain. Existing chatbot guidelines leave error handling to the designer: “transparency” is included as an important topic BIBREF3 , but, in practice, why something does not work, and under what conditions, can puzzle designers and developers, not just end-users. We presented a metric that can be used by a variety of relevant stakeholders to understand, communicate, and improve text classifier performance by improving data quality. In future work, we aim to explore other text classifier and chatbot evaluation strategies, keeping in mind the needs for understandability and transparency in this multi-stakeholder design process and maintenance practice.
Yes
9dd65dca9dffd2bf78ecc22b17824edc885d1fa2
9dd65dca9dffd2bf78ecc22b17824edc885d1fa2_0
Q: which datasets were used in validation? Text: Introduction Smart conversational agents are increasingly used across business domains BIBREF0 . We focus on recruitment chatbots that connect recruiters and job-seekers. The recruiter teams we work with are motivated by reasons of scale and accessibility to build and maintain chatbots that provide answers to frequently asked questions (FAQs) based on ML/NLP datasets. Our enterprise clients may have up to INLINEFORM0 employees, and commensurate hiring rate. We have found that almost INLINEFORM1 of end-user (job-seeker) traffic occurs outside of working hours BIBREF1 , which is consistent with the anecdotal reports of our clients that using the chatbot helped reduce email and ticket inquiries of common FAQs. The usefulness of these question-answering conversational UIs depends on building and maintaining the ML/NLP components used in the overall flow (see Fig. FIGREF4 ). In practice, the use of NLP does not improve the experience of many chatbots BIBREF2 , which is unsurprising. Although transparency (being “honest and transparent when explaining why something doesn't work”) is a core design recommendation BIBREF3 , the most commonly available higher-level platforms BIBREF4 do not provide robust ways to understand error and communicate its implications. Interpretability is a challenge beyond chatbots, and is a prerequisite for trust in both individual predictions and the overall model BIBREF5 . The development of the nex-cv metric was driven by a need for a quantification useful to developers, as well as both vendor and client non-developer staff. The nex-cv metric uses plausible negative examples to perform actionable, model-agnostic evaluation of text classification as a component in a chatbot system. It was developed, validated, and used at jobpal, a recruiting chatbot company, in projects where a client company's recruiting team trains and maintains a semi-automated conversational agent's question-answering dataset. Use of ML and NLP is subject to conversation flow design considerations, and internal and external transparency needs BIBREF6 . The chatbots do not generate answers, but provide all responses from a bank that can be managed by client staff. Each of about a dozen live chatbots answers about INLINEFORM0 of incoming questions without having to defer to a human for an answer. About two thirds of the automated guesses are confirmed by recruiters; the rest are corrected (Fig. FIGREF9 ). In “Background”, we relate our work to prior research on curated ML/NLP datasets and evaluation in chatbots. In “Approach”, we describe the metric and provide its application and data context of use. In “Validation Datasets”, we describe the datasets with which this metric has been validated. In “Validation”, we provide results from experiments conducted while developing and using the metric for over a year, addressing each of the needs of the metric, which make it a useful tool for multiple stakeholders in the chatbot design and maintenance process. We contribute a metric definition, its validation with six real projects over the course of one year (2018.Q2 through 2019.Q1), as well as an extensible implementation and testing plan, which is described in “Metric Definition” below. Background Chatbots, or “text messaging-based conversational agents”, have received particular attention in 2010s BIBREF0 . 
Many modern text-based chatbots use relatively simple NLP tools BIBREF7 , or avoid ML/NLP altogether BIBREF2 , relying on conversation flow design and non-NLP inputs like buttons and quick-replies. Conversational natural-language interfaces for question-answering have an extensive history, which distinguishes open-domain and closed-domain systems BIBREF8 . ML-based chatbots rely on curated data to provide examples for classes (commonly, “intents”), and must balance being widely-accessible to many end-users, but typically specialized in the domain and application goal BIBREF9 . In practice, design and development of a chatbot might assume a domain more focused, or different, than real use reveals. In the chatbot application context, the training dataset of a text classifier may be modified to improve that classifier's performance. The classes — “intents” — are trained with synthetic data and constitute anticipated, rather than actual, use. Existing general-purpose platforms include this synthetic data step as part of design and maintenance BIBREF4 . For example, when it comes to invocations for a voice agent BIBREF10 , dataset construction encodes findings about how users might imagine asking for some action: the authors use a crowd-sourcing mechanism to achieve both consistency useful for classification, and reflection of user expectations in the dataset. We adopt a similar approach: enabling domain-experts (recruiters) to maintain the dataset helps map end-user (job-seeker) needs to recruiters' goals. Data cleaning is not only relevant to chatbots. Model-agnostic systems for understanding machine learning can help iteratively develop machine learning models BIBREF11 . Developers tend to overlook data quality in favor of focusing on algorithmic improvements in building ML systems BIBREF12 . Feature engineering can be made accessible to non-developers or domain experts, e.g. BIBREF5 . We make use of representative examples in the process that surfaces nex-cv to non-developers; in describing this process in “Metric Application”, we map it to the inspection-explanation-refinement process employed in BIBREF11 . Enabling non-developers to perform data cleaning effectively allows developers to focus on model adjustments and feature engineering. There are many ways to measure overall chatbot quality, such as manual check-lists of high-level feature presence BIBREF13 , BIBREF2 . Static analysis and formal verification may be used with a specified flow BIBREF14 . User behavior measurements — both explicit, like ratings or feedback, and implicit, like timing or sentiment — are explored in BIBREF15 . During metric development, we used qualitative feedback from domain-expert users, and key performance indicators (KPIs), such as automatic response rate. Regardless of overall evaluation approach, the use of a classifier as a component in a complex flow demands robust and actionable evaluation of that component. Approach The nex-cv algorithm selects some classes as plausible sources of negative examples, and then uses those to partition the given dataset into training and test data (Alg. SECREF3 ). Negative examples are useful in chatbot component evaluation: the end-user interaction with a chatbot is open-ended, so the system is expected to encounter input that it should recognize as outside its domain. Low-membership classes are candidates for being ignored in training and used as negative examples in testing. 
Two mutually-exclusive variations use the INLINEFORM0 parameter for cutoff-based negative example selection (Alg. SECREF10 ); and the INLINEFORM1 parameter for proportional negative example selection (Alg. SECREF10 ). We focus on three settings, with INLINEFORM2 set to INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 . The values were tuned for typical distributions (see “Validation Datasets”), and the INLINEFORM6 is a validating measure that is comparable to 5-fold CV (see “Metric Definition”). We assume that low-population classes are all in the same domain as the rest. There may be exceptions: in some cases a new, small category may be created in response to new questions on an emergent topic well outside of the core domain. In our experience, this happens when there is a technical issue elsewhere on the site and the chatbot channel is used as an alternative to escalate the issue. In practice, our system handles this case well, even if the evaluation is less applicable. Such emergent categories either disappear over time: the topic is temporary; or grow: the topic becomes part of the domain. INLINEFORM0 Require data INLINEFORM1 s.t. INLINEFORM2 is the input text that has gold standard label INLINEFORM3 INLINEFORM4 Require label sets INLINEFORM5 s.t. INLINEFORM6 Require test fraction INLINEFORM7 and function INLINEFORM8 which randomly splits out two lists INLINEFORM9 s.t. INLINEFORM10 and INLINEFORM11 INLINEFORM0 INLINEFORM1 INLINEFORM2 s.t. INLINEFORM3 INLINEFORM4 INLINEFORM5 s.t. INLINEFORM6 INLINEFORM7 INLINEFORM8 s.t. INLINEFORM9 INLINEFORM10 s.t. INLINEFORM11 Negative Example Data Provision System Overview A chatbot (Fig. FIGREF7 ) is based on two datasets (Fig. FIGREF4 ), each maintained using a data management tool (Fig. FIGREF9 ). Traffic varies widely between projects, but is typically consistent within a project. To provide a range: in one quarter in 2018, the highest traffic chatbot had about 2000 active users, of which about 250 (ca. INLINEFORM0 ) asked questions. The lowest-traffic chatbot saw INLINEFORM1 weekly active users, of which 15 (ca. INLINEFORM2 ) asked questions. In both cases, a small number (2-4) of recruiters were responsible for maintaining the dataset. The training set of the FAQ portion of each project contains between INLINEFORM0 and INLINEFORM1 training examples across between 100 and 200 distinct classes, usually starting with about INLINEFORM2 classes and creating new classes after the system goes live and new, unanticipated user needs are encountered. To build classifiers on datasets of this size, we use spaCy BIBREF16 and fastText BIBREF17 for vectorization, with transformation for improved performance BIBREF18 , and logistic regression with L2 regularization BIBREF19 . The dataset for shared general intents is maintained through the data management tool by jobpal staff. One such classifier is shared by all companies that use a particular language; projects span English, German, Chinese, and French. About 20 general intents are trained with a total of about INLINEFORM0 to INLINEFORM1 training examples per language. These include intents that control the conversation (e.g., `stop', `help'). This shared language-specific classification step includes entity extraction of profession and city of interest to job-seekers; for example, statements like `I want a [profession] job in [city]` and `do you have any [profession] openings?' should all resolve to `job search' along with extracted keywords. 
Lastly, this classifier also identifies very common questions that affect all chatbots, but which are not in the recruitment domain: e.g., `how are you?' and `is this a robot?'. The dialog in Fig. FIGREF7 shows the FAQ functionality of the chatbots, powered by classification using company-specific FAQ datasets (see also Fig. FIGREF4 ). In most projects, users who ask question ask between 1 and 2 questions. The FAQ functionality is typically an addition to any existing information displays. Many of our chatbots also feature job discovery, including search and subscriptions. Job search may be triggered by clicking on the button [Look for a job], or writing something like “I would like a [profession] job in [location]” at almost any point in the flow. If either of location or profession is not specified, the user is prompted, and the responses are used to search current openings, which are then shown. The user may submit application or follow external links; the user may also ask questions about specific jobs or the employer more generally. Metric Definition The code available online provides the evaluation implementation, an abstract black-box definition for a classifier, and two strategies to help test an implementation. For integration testing, CustomClassifier.test() can be used to check consistency of classifier wrapper. For functional testing, nex-cv both INLINEFORM0 (Alg. SECREF10 ) and INLINEFORM1 (Alg. SECREF10 ) should yield comparable results to 5-fold cross-validation. INLINEFORM0 Require data INLINEFORM1 s.t. INLINEFORM2 is the input text that has gold standard label INLINEFORM3 INLINEFORM4 Require cutoff parameter INLINEFORM5 INLINEFORM0 in INLINEFORM1 , occurs INLINEFORM2 INLINEFORM3 in INLINEFORM4 , occurs INLINEFORM5 Cutoff Selection of Plausible Negative Example Classes In INLINEFORM0 -fold cross-validation, data is partitioned into INLINEFORM1 sets of INLINEFORM2 such that INLINEFORM3 (let the test fraction INLINEFORM4 ), and the training sets do not overlap. Then, each set of training data is evaluated using the corresponding test set. Evaluation can include many possible measures: accuracy or INLINEFORM5 ; representative examples; confusion matrix; timing data; etc. INLINEFORM0 Require data INLINEFORM1 s.t. INLINEFORM2 is the input text that has gold standard label INLINEFORM3 INLINEFORM4 Require proportion parameter INLINEFORM5 INLINEFORM0 Let INLINEFORM1 , as queue sorted from least to most occurring in INLINEFORM2 INLINEFORM3 INLINEFORM4 Pop element INLINEFORM5 from INLINEFORM6 INLINEFORM7 INLINEFORM8 in INLINEFORM9 , not in INLINEFORM10 Proportional selection of Plausible Negative Example Classes In nex-cv, test fraction INLINEFORM0 is a setting ( INLINEFORM1 for all reported experiments), and data partitions may overlap. As shown in Alg. SECREF3 , representation of high-population classes is enforced. Then, low-population classes are also split using INLINEFORM2 , and included either in the training set with their ground truth label; or in the test set as a negative example. In practice, this results in about INLINEFORM3 of the data being in training. Some low-population classes in the training set should be included as this is representative of the dataset shape; many low-population classes may affect the classification and confidence overall, depending on classification approach. Low-population classes are typically rare or relatively recent topics, so interpreting them as plausible negative examples helps to test the classifier, and its measure of confidence. 
Validation Datasets The seven datasets to which we report having applied the nex-cv metric are in the recruitment domain. Each dataset has about INLINEFORM0 classes, and most have classes with 5-10 members as well as classes with over a hundred. To characterize the content, we trained a classifier on an anonymous benchmark dataset and used it to classify a random recent sample of INLINEFORM3 English-language questions. About INLINEFORM0 of recent end-user queries in English fall into 5 categories: (1) Application Process; (2) Salary; (3) Professional Growth and Development; (4) Internships; (5) Contact a Human. Another INLINEFORM0 of end-user queries fall into 14 categories: Application Evaluation; Application Deadline; Application Delete or Modify; How Long to Apply and Hear Back; Qualification; Application Documents; Language Expectations; Thesis; Working Hours; Location; Starting at the Company; Commute; Equipment; Benefits. About INLINEFORM0 of overall requests were not recognized (with a confidence of INLINEFORM1 or higher) as any of the categories in the anonymous benchmarking set. Upon manual inspection, some of these test questions were noise, and many were topics specific to particular company FAQs, such as concerning specific work-study programs; details of the application software; and other more niche topics. The classification datasets share some overlapping topics; each also has a specific set of additional topics. Each dataset has the typical shape of a few larger classes, and many smaller ones, which have an indirect relationship to what data is expected. The use of low-population classes as plausible negative examples takes advantage of both the content of the data (closed-domain, with a topic-specific core but a considerable number of additional, outlying topics) and the membership distribution of the classes (a few well-populated ones, and many much smaller classes). The nex-cv metric may apply in other problems or domains, but we developed and validated it in a series of experiments with six live datasets, in English and German (see Fig. FIGREF17 , of which chatbot E is also the subject of Fig. FIGREF14 ), in addition to the seventh aggregate anonymous benchmark dataset described above, which was used for the comparison in Fig. FIGREF18 . Validation The following case studies validate the metric relative to each of three requirements: (1) enable data quality improvements, as in Fig. FIGREF14 , (2) not be overly-optimistic, as in Fig. FIGREF17 , (3) enable model-agnostic comparison, as in Fig. FIGREF18 . Metric Application The goal of usefulness includes interpretability: “provid[ing] qualitative understanding between the input variables and the response... [taking] into account the user’s limitations in BIBREF5 . Usefulness combines this with actionable support of chatbot design. The users include, in this case, non-developer staff on both vendor and client side: recruiters and project managers. Through iteration on internal tools, we found that displaying performance information in the form of, “which 2-3 topics are the biggest problem?” was most effective for understanding, communication, and action. Over the course of a year, the nex-cv metric informed this analysis. During this time, both qualitative feedback and KPIs have validated that it was effective both for trust and for the end-user experience. The automation rate KPI — proportion of incoming queries that did not need deferral to a human, but answered immediately, as in Fig. 
FIGREF7 — has risen to and remained at INLINEFORM0 across projects, mainly due to data quality support during both design and maintenance. In one illustrative project (Fig. FIGREF14 ) the automation rate had become as low as INLINEFORM0 . The recruiters responsible for dealing with escalated questions became frustrated to see questions come up that had been asked before. Action needed to be taken, and this project became one of the first case studies for developing the application of nex-cv internally. After the intervention, the automated response rate rose into the desirable 70s range and remained there. The quality improvements were explained and implemented by an internal project manager, who proactively included client domain-expert users, over calls and emails, in explanations of what improvements were made and why. Initially, INLINEFORM1 classes were trained with INLINEFORM2 examples, with a long tail of low-population classes. Following the intervention, the dataset grew by INLINEFORM3 and, despite the risk of concept drift, did not deteriorate. To use nex-cv, we aggregate the confusion matrix from the INLINEFORM0 setting and rank how confused each pair of categories is. The most confused 2-3 pairs of classes are then the focus of conceptual, manual review in the dataset. Evaluation is performed again, producing a new ranking that guides the next 2-3 classes to focus on, until the metric falls below an acceptable threshold. There are other sources of classification error, but overlap between conceptually related pairs of classes accounts for most of the data quality problems we encounter in the datasets in practice, and is more readily understandable than other forms of error. This relatively simple approach is implemented as a Jupyter notebook accessible to non-developers (internal project managers). The details of the pairwise measures and the acceptability threshold were developed iteratively based on project manager feedback. The project managers also honed processes and intuitions for communicating this information to clients effectively. In extreme situations such as that shown in Fig. FIGREF14 , the project managers made a presentation to get buy-in and implemented data quality improvements on their own. However, the typical practice now is to provide explanations, in calls and emails, of the “confusions” between one or a few pairs of specific categories to the client. This practice builds awareness of data quality across stakeholders, and the domain-experts (recruiters) are better able to use the system to create the envisioned chatbot functionality without major intervention. As the number of projects grows, the metric can be used by project managers to monitor and prioritize data quality improvement tasks. Metric is not Overly Optimistic One of the practical motivations for a new metric was the sense that the existing metrics were too optimistic to be useful for improving chatbot behavior in response to overall qualitative feedback. As shown in Fig. FIGREF14 , for example, the typical INLINEFORM0 metric is more optimistic than nex-cv. As an initial step in validating the metric, we applied it in the case of six under-performing datasets that required some intervention. Fig. FIGREF14 shows the differences in data abundance and classifier quality across these six pseudonymized snapshots. Internal QA staff gave the human rating scores by considering whether a question-answer pair seemed reasonable: they could pick “yes”, “no”, or “can't tell”; in most cases, the appropriateness was not ambiguous. As shown in Fig.
FIGREF17, the human-rater estimate of quality is consistently more lenient than any of the automated measures. Chatbot E in this case is the same project as shown in Fig. FIGREF14, prior to improvements. Four of the six datasets analyzed showed a very large difference between the human estimate of quality and the automated estimate, which, upon investigation, revealed significant conceptual overlaps in the classes that the recruiters had trained and in the answers given. So, indeed, the classifier was making surprisingly adequate guesses, but with very low confidence. Following the intervention described in the previous section, which includes ongoing communication of any outstanding problems by project managers to recruiter teams, this type of error became rare and quickly addressed. Metric can be used for Internal and External Comparison We used the nex-cv metric to help compare the performance of our classification component with two leading vendors for general-purpose chatbot development. Fig. FIGREF18 shows the comparison between jobpal and two leading vendors in the space. The three settings of the metric were aggregated to provide a plausible range of estimated performance. The range of accuracy was significantly higher for our domain-specific classifier than for those trained using general-purpose tools. Aside from being useful for classifying into known classes, the metric must account for fallback or escalation. This may be modeled as a separate class (as one of the external engines does with the “fallback” intent), or by relying on confidence scores from classifiers that produce measures of confidence (all engines provide some estimate of confidence that may be used). The “carefulness” score was included to represent how useful the confidence score is for deciding when to decline an answer: the number of incorrect guesses that were rejected due to too-low confidence scores divided by the total number of no-answer-given cases (no guess or low-confidence guess). Fig. FIGREF18 shows that the performance of our ML/NLP component on our domain-specific dataset is better than that of two popular general-purpose platforms, both in terms of classification accuracy and rate of deferral due to low-confidence answers. This comparison mechanism validates our system relative to existing external services in a way that is interpretable by various internal stakeholders, not only the developer staff. Conclusion We described and validated the nex-cv metric, which is a modification of cross-validation that makes use of plausible negative examples from low-population classes in the datasets typical of our application area and domain. Existing chatbot guidelines leave error handling to the designer: “transparency” is included as an important topic BIBREF3, but, in practice, why something does not work, and under what conditions, can puzzle designers and developers, not just end-users. We presented a metric that can be used by a variety of relevant stakeholders to understand, communicate, and improve text classifier performance by improving data quality. In future work, we aim to explore other text classifier and chatbot evaluation strategies, keeping in mind the needs for understandability and transparency in this multi-stakeholder design process and maintenance practice.
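To make the pairwise review loop and the “carefulness” score described above more concrete, the sketch below shows how such computations could look. This is our illustration, not the authors' internal notebook: the confusion-matrix aggregation, function names, and thresholds are assumptions.

```python
import numpy as np

def most_confused_pairs(confusion, labels, top_k=3):
    """Rank off-diagonal class pairs of an aggregated confusion matrix
    (rows = true class, columns = predicted class) by how often the two
    classes are mistaken for each other."""
    scores = {}
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            scores[(labels[i], labels[j])] = confusion[i, j] + confusion[j, i]
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

def carefulness(rejected_incorrect, no_answer_total):
    """Incorrect guesses rejected by the confidence threshold, divided by
    all cases in which no answer was given (no guess or low-confidence guess)."""
    return rejected_incorrect / no_answer_total if no_answer_total else 0.0
```

In practice, the ranked pairs would drive the 2-3-classes-at-a-time review cycle described above, and carefulness would complement accuracy when comparing engines.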
Unanswerable
e91692136033bbc3f19743d0ee5784365746a820
e91692136033bbc3f19743d0ee5784365746a820_0
Q: It looks like learning to paraphrase questions, a neural scoring model and a answer selection model cannot be trained end-to-end. How are they trained? Text: Introduction Enabling computers to automatically answer questions posed in natural language on any domain or topic has been the focus of much research in recent years. Question answering (QA) is challenging due to the many different ways natural language expresses the same information need. As a result, small variations in semantically equivalent questions, may yield different answers. For example, a hypothetical QA system must recognize that the questions “who created microsoft” and “who started microsoft” have the same meaning and that they both convey the founder relation in order to retrieve the correct answer from a knowledge base. Given the great variety of surface forms for semantically equivalent expressions, it should come as no surprise that previous work has investigated the use of paraphrases in relation to question answering. There have been three main strands of research. The first one applies paraphrasing to match natural language and logical forms in the context of semantic parsing. BIBREF0 use a template-based method to heuristically generate canonical text descriptions for candidate logical forms, and then compute paraphrase scores between the generated texts and input questions in order to rank the logical forms. Another strand of work uses paraphrases in the context of neural question answering models BIBREF1 , BIBREF2 , BIBREF3 . These models are typically trained on question-answer pairs, and employ question paraphrases in a multi-task learning framework in an attempt to encourage the neural networks to output similar vector representations for the paraphrases. The third strand of research uses paraphrases more directly. The idea is to paraphrase the question and then submit the rewritten version to a QA module. Various resources have been used to produce question paraphrases, such as rule-based machine translation BIBREF4 , lexical and phrasal rules from the Paraphrase Database BIBREF5 , as well as rules mined from Wiktionary BIBREF6 and large-scale paraphrase corpora BIBREF7 . A common problem with the generated paraphrases is that they often contain inappropriate candidates. Hence, treating all paraphrases as equally felicitous and using them to answer the question could degrade performance. To remedy this, a scoring model is often employed, however independently of the QA system used to find the answer BIBREF4 , BIBREF5 . Problematically, the separate paraphrase models used in previous work do not fully utilize the supervision signal from the training data, and as such cannot be properly tuned to the question answering tasks at hand. Based on the large variety of possible transformations that can generate paraphrases, it seems likely that the kinds of paraphrases that are useful would depend on the QA application of interest BIBREF8 . BIBREF9 use features that are defined over the original question and its rewrites to score paraphrases. Examples include the pointwise mutual information of the rewrite rule, the paraphrase's score according to a language model, and POS tag features. In the context of semantic parsing, BIBREF6 also use the ID of the rewrite rule as a feature. However, most of these features are not informative enough to model the quality of question paraphrases, or cannot easily generalize to unseen rewrite rules. 
In this paper, we present a general framework for learning paraphrases for question answering tasks. Given a natural language question, our model estimates a probability distribution over candidate answers. We first generate paraphrases for the question, which can be obtained by one or several paraphrasing systems. A neural scoring model predicts the quality of the generated paraphrases, while learning to assign higher weights to those which are more likely to yield correct answers. The paraphrases and the original question are fed into a QA model that predicts a distribution over answers given the question. The entire system is trained end-to-end using question-answer pairs as a supervision signal. The framework is flexible, it does not rely on specific paraphrase or QA models. In fact, this plug-and-play functionality allows to learn specific paraphrases for different QA tasks and to explore the merits of different paraphrasing models for different applications. We evaluate our approach on QA over Freebase and text-based answer sentence selection. We employ a range of paraphrase models based on the Paraphrase Database (PPDB; BIBREF10 ), neural machine translation BIBREF11 , and rules mined from the WikiAnswers corpus BIBREF9 . Results on three datasets show that our framework consistently improves performance; it achieves state-of-the-art results on GraphQuestions and competitive performance on two additional benchmark datasets using simple QA models. Problem Formulation Let $q$ denote a natural language question, and $a$ its answer. Our aim is to estimate $p\left(a | q\right)$ , the conditional probability of candidate answers given the question. We decompose $p\left(a | q\right)$ as: $$\hspace*{-5.69046pt}p\left(a | q\right) = \sum _{q^{\prime } \in {H_q \cup \lbrace q\rbrace }} { \underbrace{p_{\psi }\left(a | q^{\prime } \right)}_{\text{~~~~QA Model~~~~}} \underbrace{p_{\theta }\left(q^{\prime } | q\right)}_{\text{Paraphrase Model}} }$$ (Eq. 2) where $H_q$ is the set of paraphrases for question $q$ , $\psi $ are the parameters of a QA model, and $\theta $ are the parameters of a paraphrase scoring model. As shown in Figure 1 , we first generate candidate paraphrases $H_q$ for question $q$ . Then, a neural scoring model predicts the quality of the generated paraphrases, and assigns higher weights to the paraphrases which are more likely to obtain the correct answers. These paraphrases and the original question simultaneously serve as input to a QA model that predicts a distribution over answers for a given question. Finally, the results of these two models are fused to predict the answer. In the following we will explain how $p\left(q^{\prime } | q\right)$ and $p\left(a | q^{\prime }\right)$ are estimated. Paraphrase Generation As shown in Equation ( 2 ), the term $p\left(a | q\right)$ is the sum over $q$ and its paraphrases $H_q$ . Ideally, we would generate all the paraphrases of $q$ . However, since this set could quickly become intractable, we restrict the number of candidate paraphrases to a manageable size. In order to increase the coverage and diversity of paraphrases, we employ three methods based on: (1) lexical and phrasal rules from the Paraphrase Database BIBREF10 ; (2) neural machine translation models BIBREF12 , BIBREF13 ; and (3) paraphrase rules mined from clusters of related questions BIBREF9 . We briefly describe these models below, however, there is nothing inherent in our framework that is specific to these, any other paraphrase generator could be used instead. 
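To make the decomposition in Equation (2) concrete, the following sketch shows how an answer distribution could be assembled from a QA model and a paraphrase scorer. It is an illustrative outline under assumed interfaces (`qa_model` returns a distribution over candidate answers, `para_scorer` returns an unnormalized score), not the authors' implementation.

```python
import torch

def answer_distribution(question, paraphrases, qa_model, para_scorer):
    """p(a|q) = sum over q' in H_q ∪ {q} of p_psi(a|q') * p_theta(q'|q)  (Eq. 2)."""
    candidates = [question] + paraphrases                     # H_q ∪ {q}
    # p_theta(q'|q): paraphrase scores normalized over the candidate set
    scores = torch.stack([para_scorer(question, c) for c in candidates])
    p_para = torch.softmax(scores, dim=0)                     # shape: (K,)
    # p_psi(a|q'): answer distribution predicted for each rewrite
    p_ans = torch.stack([qa_model(c) for c in candidates])    # shape: (K, |A|)
    return (p_para.unsqueeze(1) * p_ans).sum(dim=0)           # shape: (|A|,)
```

At test time, the predicted answer is then simply the argmax of this distribution over the set of candidate answers.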
Bilingual pivoting BIBREF14 is one of the most well-known approaches to paraphrasing; it uses bilingual parallel corpora to learn paraphrases based on techniques from phrase-based statistical machine translation (SMT, BIBREF15 ). The intuition is that two English strings that translate to the same foreign string can be assumed to have the same meaning. The method first extracts a bilingual phrase table and then obtains English paraphrases by pivoting through foreign language phrases. Drawing inspiration from syntax-based SMT, BIBREF16 and BIBREF17 extended this idea to syntactic paraphrases, leading to the creation of PPDB BIBREF18 , a large-scale paraphrase database containing over a billion of paraphrase pairs in 24 different languages. BIBREF10 further used a supervised model to automatically label paraphrase pairs with entailment relationships based on natural logic BIBREF19 . In our work, we employ bidirectionally entailing rules from PPDB. Specifically, we focus on lexical (single word) and phrasal (multiword) rules which we use to paraphrase questions by replacing words and phrases in them. An example is shown in Table 1 where we substitute car with vehicle and manufacturer with producer. BIBREF11 revisit bilingual pivoting in the context of neural machine translation (NMT, BIBREF12 , BIBREF13 ) and present a paraphrasing model based on neural networks. At its core, NMT is trained end-to-end to maximize the conditional probability of a correct translation given a source sentence, using a bilingual corpus. Paraphrases can be obtained by translating an English string into a foreign language and then back-translating it into English. NMT-based pivoting models offer advantages over conventional methods such as the ability to learn continuous representations and to consider wider context while paraphrasing. In our work, we select German as our pivot following BIBREF11 who show that it outperforms other languages in a wide range of paraphrasing experiments, and pretrain two NMT systems, English-to-German (En-De) and German-to-English (De-En). A naive implementation would translate a question to a German string and then back-translate it to English. However, using only one pivot can lead to inaccuracies as it places too much faith on a single translation which may be wrong. Instead, we translate from multiple pivot sentences BIBREF11 . As shown in Figure 2 , question $q$ is translated to $K$ -best German pivots, $\mathcal {G}_q = \lbrace g_1 , \dots , g_K \rbrace $ . The probability of generating paraphrase $q^{\prime } = y_1 \dots y_{|q^{\prime }|}$ is decomposed as: $$\begin{aligned} p\left( q^{\prime } | \mathcal {G}_q \right) &= \prod _{ t=1 }^{ |q^{\prime }| }{ p\left( y_t | y_{<t} , \mathcal {G}_q \right) } \\ &= \prod _{ t=1 }^{ |q^{\prime }| }{ \sum _{ k=1 }^{ K }{ p\left( g_k | q \right) p\left( y_{ t }|y_{ <t }, g_k \right) } } \end{aligned}$$ (Eq. 8) where $y_{<t} = y_1 , \dots , y_{t-1}$ , and $|q^{\prime }|$ is the length of $q^{\prime }$ . Probabilities $p\left( g_k | q \right)$ and $p\left( y_{ t }|y_{ <t }, g_k \right)$ are computed by the En-De and De-En models, respectively. We use beam search to decode tokens by conditioning on multiple pivoting sentences. The results with the best decoding scores are considered candidate paraphrases. Examples of NMT paraphrases are shown in Table 1 . Compared to PPDB, NMT-based paraphrases are syntax-agnostic, operating on the surface level without knowledge of any underlying grammar. 
Furthermore, paraphrase rules are captured implicitly and cannot be easily extracted, e.g., from a phrase table. As mentioned earlier, the NMT-based approach has the potential of performing major rewrites as paraphrases are generated while considering wider contextual information, whereas PPDB paraphrases are more local, and mainly handle lexical variation. Our third paraphrase generation approach uses rules mined from the WikiAnswers corpus BIBREF9 which contains more than 30 million question clusters labeled as paraphrases by WikiAnswers users. This corpus is a large resource (the average cluster size is 25), but is relatively noisy due to its collaborative nature – $45\%$ of question pairs are merely related rather than genuine paraphrases. We therefore followed the method proposed in BIBREF7 to harvest paraphrase rules from the corpus. We first extracted question templates (i.e., questions with at most one wild-card) that appear in at least ten clusters. Any two templates co-occurring (more than five times) in the same cluster and with the same arguments were deemed paraphrases. Table 2 shows examples of rules extracted from the corpus. During paraphrase generation, we consider substrings of the input question as arguments, and match them with the mined template pairs. For example, the stemmed input question in Table 1 can be paraphrased using the rules (“what be the zip code of __”, “what be __ 's postal code”) and (“what be the zip code of __”, “zip code of __”). If no exact match is found, we perform fuzzy matching by ignoring stop words in the question and templates. Paraphrase Scoring Recall from Equation ( 2 ) that $p_{\theta }\left(q^{\prime } | q\right)$ scores the generated paraphrases $q^{\prime } \in H_q \cup \lbrace q\rbrace $ . We estimate $p_{\theta }\left(q^{\prime } | q\right)$ using neural networks given their successful application to paraphrase identification tasks BIBREF20 , BIBREF21 , BIBREF22 . Specifically, the input question and its paraphrases are encoded as vectors. Then, we employ a neural network to obtain the score $s\left(q^{\prime } | q\right)$ which after normalization becomes the probability $p_{\theta }\left(q^{\prime } | q\right)$ . Let $q = q_1 \dots q_{|q|}$ denote an input question. Every word is initially mapped to a $d$ -dimensional vector. In other words, vector $\mathbf {q}_t$ is computed via $\mathbf {q}_t = \mathbf {W}_q \mathbf {e}\left( q_t \right) $ , where $\mathbf {W}_q \in \mathbb {R}^{d \times |\mathcal {V}|}$ is a word embedding matrix, $|\mathcal {V}|$ is the vocabulary size, and $\mathbf {e}\left( q_t \right)$ is a one-hot vector. Next, we use a bi-directional recurrent neural network with long short-term memory units (LSTM, BIBREF23 ) as the question encoder, which is shared by the input questions and their paraphrases. The encoder recursively processes tokens one by one, and uses the encoded vectors to represent questions. We compute the hidden vectors at the $t$ -th time step via: $$\begin{aligned} \overrightarrow{\mathbf {h} }_{ t } &= \operatornamewithlimits{\textrm {LSTM}}\left( \overrightarrow{\mathbf {h} }_{ t-1 } , \mathbf {q}_t \right) , t = 1, \dots , |q| \\ \overleftarrow{\mathbf {h} }_{ t } &= \operatornamewithlimits{\textrm {LSTM}}\left( \overleftarrow{\mathbf {h} }_{ t+1 } , \mathbf {q}_t \right) , t = |q|, \dots , 1 \end{aligned}$$ (Eq. 14) where $\overrightarrow{\mathbf {h} }_{ t }, \overleftarrow{\mathbf {h} }_{ t } \in \mathbb {R}^{n}$ . 
In this work we follow the $\operatornamewithlimits{\textrm {LSTM}}$ function described in BIBREF24 . The representation of $q$ is obtained by: $$\mathbf {q} = \left[ \overrightarrow{\mathbf {h} }_{ |q| } , \overleftarrow{\mathbf {h} }_{ 1 } \right]$$ (Eq. 15) where $[\cdot ,\cdot ]$ denotes concatenation, and $\mathbf {q} \in \mathbb {R}^{2n}$ . After obtaining vector representations for $q$ and $q^{\prime }$ , we compute the score $s\left(q^{\prime } | q\right)$ via: $$s\left(q^{\prime } | q\right) = \mathbf {w}_s \cdot \left[ \mathbf {q} , \mathbf {q^{\prime }} , \mathbf {q} \odot \mathbf {q^{\prime }} \right] + b_s$$ (Eq. 17) where $\mathbf {w}_s \in \mathbb {R}^{6n}$ is a parameter vector, $[\cdot ,\cdot ,\cdot ]$ denotes concatenation, $\odot $ is element-wise multiplication, and $b_s$ is the bias. Alternative ways to compute $s\left(q^{\prime } | q\right)$ such as dot product or with a bilinear term were not empirically better than Equation ( 17 ) and we omit them from further discussion. For paraphrases $q^{\prime } \in H_q \cup \lbrace q\rbrace $ , the probability $p_{\theta }\left(q^{\prime } | q\right)$ is computed via: $$p_{\theta }\left(q^{\prime } | q\right) = \frac{\exp \lbrace s\left(q^{\prime } | q\right)\rbrace }{\sum _{ r \in H_q \cup \lbrace q\rbrace }{ \exp \lbrace s\left(r | q\right)\rbrace }}$$ (Eq. 19) where the paraphrase scores are normalized over the set $H_q \cup \lbrace q\rbrace $ . QA Models The framework defined in Equation ( 2 ) is relatively flexible with respect to the QA model being employed as long as it can predict $p_{\psi }\left(a | q^{\prime } \right)$ . We illustrate this by performing experiments across different tasks and describe below the models used for these tasks. In our first task we use the Freebase knowledge base to answer questions. Query graphs for the questions typically contain more than one predicate. For example, to answer the question “who is the ceo of microsoft in 2008”, we need to use one relation to query “ceo of microsoft” and another relation for the constraint “in 2008”. For this task, we employ the SimpleGraph model described in BIBREF25 , BIBREF26 , and follow their training protocol and feature design. In brief, their method uses rules to convert questions to ungrounded logical forms, which are subsequently matched against Freebase subgraphs. The QA model learns from question-answer pairs: it extracts features for pairs of questions and Freebase subgraphs, and uses a logistic regression classifier to predict the probability that a candidate answer is correct. We perform entity linking using the Freebasee/KG API on the original question BIBREF25 , BIBREF26 , and generate candidate Freebase subgraphs. The QA model estimates how likely it is for a subgraph to yield the correct answer. Given a question and a collection of relevant sentences, the goal of this task is to select sentences which contain an answer to the question. The assumption is that correct answer sentences have high semantic similarity to the questions BIBREF27 , BIBREF28 , BIBREF29 . We use two bi-directional recurrent neural networks (BiLSTM) to separately encode questions and answer sentences to vectors (Equation ( 15 )). Similarity scores are computed as shown in Equation ( 17 ), and then squashed to $(0,1)$ by a $\operatorname{\textrm {sigmoid}}$ function in order to predict $p_{\psi }\left(a | q^{\prime } \right)$ . 
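The paraphrase scoring model of Equations (14)–(19) can be sketched in a few lines. The snippet below is a simplified, single-example rendering with assumed hyperparameters; it is not the authors' code, and it omits batching and padding details.

```python
import torch
import torch.nn as nn

class ParaphraseScorer(nn.Module):
    """Shared BiLSTM encoder plus a linear scorer over [q, q', q ⊙ q']."""

    def __init__(self, vocab_size, emb_dim=300, hidden=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)             # W_q
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.w_s = nn.Linear(6 * hidden, 1)                        # w_s ∈ R^{6n}, b_s

    def encode(self, tokens):                                      # tokens: (1, len)
        out, _ = self.bilstm(self.embed(tokens))                   # (1, len, 2n)
        n = out.size(-1) // 2
        # q = [→h_{|q|}, ←h_1]: last forward state and first backward state
        return torch.cat([out[:, -1, :n], out[:, 0, n:]], dim=-1)  # (1, 2n)

    def forward(self, question, candidates):
        q = self.encode(question)
        scores = []
        for cand in candidates:                                    # H_q ∪ {q}
            c = self.encode(cand)
            feats = torch.cat([q, c, q * c], dim=-1)               # (1, 6n)
            scores.append(self.w_s(feats))                         # s(q'|q), Eq. (17)
        return torch.softmax(torch.cat(scores, dim=-1), dim=-1)    # p_theta, Eq. (19)
```

The answer sentence selection model described above has a similar shape, with separate encoders for questions and answer sentences and a sigmoid over the score instead of the softmax.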
Training and Inference We use a log-likelihood objective for training, which maximizes the likelihood of the correct answer given a question: $$\operatorname{\textrm {maximize}}\sum _{(q , a) \in \mathcal {D} }{ \log {p \left( a | q \right)}}$$ (Eq. 24) where $\mathcal {D}$ is the set of all question-answer training pairs, and $p \left( a | q \right)$ is computed as shown in Equation ( 2 ). For the knowledge base QA task, we predict how likely it is that a subgraph obtains the correct answer, and the answers of some candidate subgraphs are only partially correct. We therefore use the binary cross entropy between the candidate subgraph's F1 score and the prediction as the objective function. The RMSProp algorithm BIBREF30 is employed to solve this non-convex optimization problem. Moreover, dropout is used for regularizing the recurrent neural networks BIBREF24. At test time, we generate paraphrases for the question $q$, and then predict the answer by: $$\hat{a} = \operatornamewithlimits{arg\,max}_{a^{\prime } \in \mathcal {C}_q}{ p \left( a^{\prime } | q \right) }$$ (Eq. 25) where $\mathcal {C}_q$ is the set of candidate answers, and $p \left( a^{\prime } | q \right)$ is computed as shown in Equation ( 2 ). Experiments We compared our model, which we call Para4QA (as shorthand for learning to paraphrase for question answering), against multiple previous systems on three datasets. In the following we introduce these datasets, provide implementation details for our model, describe the systems used for comparison, and present our results. Datasets Our model was trained on three datasets, representative of different types of QA tasks. The first two datasets focus on question answering over a structured knowledge base, whereas the third one is specific to answer sentence selection. This dataset BIBREF31 contains $3,778$ training instances and $2,032$ test instances. Questions were collected by querying the Google Suggest API. A breadth-first search beginning with wh- was conducted and the answers were crowd-sourced using Freebase as the backend knowledge base. The dataset BIBREF32 contains $5,166$ question-answer pairs (evenly split into a training and a test set). It was created by asking crowd workers to paraphrase 500 Freebase graph queries in natural language. This dataset BIBREF28 has $3,047$ questions sampled from Bing query logs. The questions are associated with $29,258$ candidate answer sentences, $1,473$ of which contain the correct answers to the questions. Implementation Details Candidate paraphrases were stemmed BIBREF33 and lowercased. We discarded duplicate or trivial paraphrases which only rewrite stop words or punctuation. For the NMT model, we followed the implementation and settings described in BIBREF11, and used English $\leftrightarrow $ German as the language pair. The system was trained on data released as part of the WMT15 shared translation task ( $4.2$ million sentence pairs). We also had access to back-translated monolingual training data BIBREF34. Rare words were split into subword units BIBREF35 to handle out-of-vocabulary words in questions. We used the top 15 decoding results as candidate paraphrases. We used the S size package of PPDB 2.0 BIBREF10 for high precision. At most 10 candidate paraphrases were considered. We mined paraphrase rules from WikiAnswers BIBREF9 as described in Section UID9 . The extracted rules were ranked using the pointwise mutual information between template pairs in the WikiAnswers corpus. The top 10 candidate paraphrases were used.
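As an illustration of how the mined WikiAnswers rules can be ranked by pointwise mutual information, consider the sketch below. It is a simplified reading of the procedure (counts taken at the cluster level, illustrative threshold), not the original mining code.

```python
import math
from collections import Counter
from itertools import combinations

def rank_template_pairs(clusters, min_cooccurrence=5):
    """Rank co-occurring question templates by PMI over WikiAnswers clusters.
    `clusters` is an iterable of template sets, one set per question cluster."""
    template_counts, pair_counts = Counter(), Counter()
    n_clusters = 0
    for templates in clusters:
        n_clusters += 1
        template_counts.update(templates)
        pair_counts.update(frozenset(p) for p in combinations(sorted(templates), 2))

    def pmi(pair):
        t1, t2 = tuple(pair)
        p_pair = pair_counts[pair] / n_clusters
        return math.log(p_pair / ((template_counts[t1] / n_clusters) *
                                  (template_counts[t2] / n_clusters)))

    ranked = [(tuple(p), pmi(p))
              for p, c in pair_counts.items() if c > min_cooccurrence]
    return sorted(ranked, key=lambda x: x[1], reverse=True)
```

The top-ranked template pairs are then the ones applied (with exact or fuzzy matching of the wild-card argument) to rewrite an input question.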
For the paraphrase scoring model, we used GloVe BIBREF36 vectors pretrained on Wikipedia 2014 and Gigaword 5 to initialize the word embedding matrix. We kept this matrix fixed across datasets. Out-of-vocabulary words were replaced with a special unknown symbol. We also augmented questions with start-of- and end-of-sequence symbols. Word vectors for these special symbols were updated during training. Model hyperparameters were validated on the development set. The dimensions of hidden vectors and word embeddings were selected from $\lbrace 50,100,200\rbrace $ and $\lbrace 100,200\rbrace $ , respectively. The dropout rate was selected from $\lbrace 0.2,0.3,0.4\rbrace $ . The BiLSTM for the answer sentence selection QA model used the same hyperparameters. Parameters were randomly initialized from a uniform distribution $\mathcal {U}\left(-0.08,0.08\right)$ . The learning rate and decay rate of RMSProp were $0.01$ and $0.95$ , respectively. The batch size was set to 150. To alleviate the exploding gradient problem BIBREF37 , the gradient norm was clipped to 5. Early stopping was used to determine the number of epochs. Paraphrase Statistics Table 3 presents descriptive statistics on the paraphrases generated by the various systems across datasets (training set). As can be seen, the average paraphrase length is similar to the average length of the original questions. The NMT method generates more paraphrases and has wider coverage, while the average number and coverage of the other two methods varies per dataset. As a way of quantifying the extent to which rewriting takes place, we report BLEU BIBREF38 and TER BIBREF39 scores between the original questions and their paraphrases. The NMT method and the rules extracted from WikiAnswers tend to paraphrase more (i.e., have lower BLEU and higher TER scores) compared to PPDB. Comparison Systems We compared our framework to previous work and several ablation models which either do not use paraphrases or paraphrase scoring, or are not jointly trained. The first baseline only uses the base QA models described in Section "QA Models" (SimpleGraph and BiLSTM). The second baseline (AvgPara) does not take advantage of paraphrase scoring. The paraphrases for a given question are used while the QA model's results are directly averaged to predict the answers. The third baseline (DataAugment) employs paraphrases for data augmentation during training. Specifically, we use the question, its paraphrases, and the correct answer to automatically generate new training samples. In the fourth baseline (SepPara), the paraphrase scoring model is separately trained on paraphrase classification data, without taking question-answer pairs into account. In the experiments, we used the Quora question paraphrase dataset which contains question pairs and labels indicating whether they constitute paraphrases or not. We removed questions with more than 25 tokens and sub-sampled to balance the dataset. We used $90\%$ of the resulting 275K examples for training, and the remaining for development. The paraphrase score $s\left(q^{\prime } | q\right)$ (Equation ( 17 )) was wrapped by a $\operatorname{\textrm {sigmoid}}$ function to predict the probability of a question pair being a paraphrase. A binary cross-entropy loss was used as the objective. The classification accuracy on the dev set was $80.6\%$ . 
Finally, in order to assess the individual contribution of different paraphrasing resources, we compared the Para4QA model against versions of itself with one paraphrase generator removed ( $-$ NMT/ $-$ PPDB/ $-$ Rule). Results We first discuss the performance of Para4QA on GraphQuestions and WebQuestions. The first block in Table 4 shows a variety of systems previously described in the literature using average F1 as the evaluation metric BIBREF31 . Among these, ParaSemp, SubGraph, MCCNN, and BiLayered utilize paraphrasing resources. The second block compares Para4QA against various related baselines (see Section "Comparison Systems" ). SimpleGraph results on WebQuestions and GraphQuestions are taken from BIBREF25 and BIBREF26 , respectively. Overall, we observe that Para4QA outperforms baselines which either do not employ paraphrases (SimpleGraph) or paraphrase scoring (AvgPara, DataAugment), or are not jointly trained (SepPara). On GraphQuestions, our model Para4QA outperforms the previous state of the art by a wide margin. Ablation experiments with one of the paraphrase generators removed show that performance drops most when the NMT paraphrases are not used on GraphQuestions, whereas on WebQuestions removal of the rule-based generator hurts performance most. One reason is that the rule-based method has higher coverage on WebQuestions than on GraphQuestions (see Table 3 ). Results on WikiQA are shown in Table 5 . We report MAP and MMR which evaluate the relative ranks of correct answers among the candidate sentences for a question. Again, we observe that Para4QA outperforms related baselines (see BiLSTM, DataAugment, AvgPara, and SepPara). Ablation experiments show that performance drops most when NMT paraphrases are removed. When word matching features are used (see +cnt in the third block), Para4QA reaches state of the art performance. Examples of paraphrases and their probabilities ${p_{\theta }\left(q^{\prime } | q\right)}$ (see Equation ( 19 )) learned by Para4QA are shown in Table 6 . The two examples are taken from the development set of GraphQuestions and WebQuestions, respectively. We also show the Freebase relations used to query the correct answers. In the first example, the original question cannot yield the correct answer because of the mismatch between the question and the knowledge base. The paraphrase contains “role” in place of “sort of part”, increasing the chance of overlap between the question and the predicate words. The second question contains an informal expression “play 4”, which confuses the QA model. The paraphrase model generates “play for” and predicts a high paraphrase score for it. More generally, we observe that the model tends to give higher probabilities ${p_{\theta }\left(q^{\prime } | q\right)}$ to paraphrases biased towards delivering appropriate answers. We also analyzed which structures were mostly paraphrased within a question. We manually inspected 50 (randomly sampled) questions from the development portion of each dataset, and their three top scoring paraphrases (Equation ( 17 )). We grouped the most commonly paraphrased structures into the following categories: a) question words, i.e., wh-words and and “how”; b) question focus structures, i.e., cue words or cue phrases for an answer with a specific entity type BIBREF40 ; c) verbs or noun phrases indicating the relation between the question topic entity and the answer; and d) structures requiring aggregation or imposing additional constraints the answer must satisfy BIBREF43 . 
In the example “which year did Avatar release in UK”, the question word is “which”, the question focus is “year”, the verb is “release”, and “in UK” constrains the answer to a specific location. Figure 3 shows the degree to which different types of structures are paraphrased. As can be seen, most rewrites affect Relation Verb, especially on WebQuestions. Question Focus, Relation NP, and Constraint & Aggregation are more often rewritten in GraphQuestions compared to the other datasets. Finally, we examined how our method fares on simple versus complex questions. We performed this analysis on GraphQuestions as it contains a larger proportion of complex questions. We consider questions that contain a single relation as simple. Complex questions have multiple relations or require aggregation. Table 7 shows how our model performs in each group. We observe improvements for both types of questions, with the impact on simple questions being more pronounced. This is not entirely surprising as it is easier to generate paraphrases and predict the paraphrase scores for simpler questions. Conclusions In this work we proposed a general framework for learning paraphrases for question answering. Paraphrase scoring and QA models are trained end-to-end on question-answer pairs, which results in learning paraphrases with a purpose. The framework is not tied to a specific paraphrase generator or QA system. In fact it allows to incorporate several paraphrasing modules, and can serve as a testbed for exploring their coverage and rewriting capabilities. Experimental results on three datasets show that our method improves performance across tasks. There are several directions for future work. The framework can be used for other natural language processing tasks which are sensitive to the variation of input (e.g., textual entailment or summarization). We would also like to explore more advanced paraphrase scoring models BIBREF48 , BIBREF49 as well as additional paraphrase generators since improvements in the diversity and the quality of paraphrases could also enhance QA performance.
using multiple pivot sentences
94e17980435aaa9fc3b5328f16f3368dc8a736bd
94e17980435aaa9fc3b5328f16f3368dc8a736bd_0
Q: What multimodal representations are used in the experiments? Text: Introduction Language understanding is a task proving difficult to automatize, because, among other factors, much of the information that is needed for the correct interpretation of an utterance is not explicit in text BIBREF0. This contrasts with how natural is language understanding for humans, who can cope easily with information absent in text, using common sense and background knowledge like, for instance, typical spatial relations between objects. From another perspective, it is well-known that the visual modality provides complementary information to that in the text. In fact, recent advances in deep learning research have led the field of computer vision and natural language processing to significant progress in tasks that involve visual and textual understanding. Tasks that include visual and textual content include Image Captioning BIBREF1, Visual Question Answering BIBREF2, and Visual Machine Translation BIBREF3, among others. On the other hand, progress in language understanding has been driven by datasets which measure the quality of sentence representations, specially those where inference tasks are performed on top of sentence representations, including textual entailment BIBREF4, BIBREF5 and semantic textual similarity (STS). In STS BIBREF6, for instance, pairs of sentences have been annotated with similarity scores, with top scores for semantically equivalent sentences and bottom scores for completely unrelated sentences. STS provides a unified framework for extrinsic evaluation of multiple semantic aspects such as compositionality and phrase similarity. Contrary to related tasks, such as textual entailment and paraphrase detection, STS incorporates the notion of graded semantic similarity between the pair of textual sentences and is symmetric. In this paper we extend STS to the visual modality, and present Visual Semantic Textual Similarity (vSTS), a task and dataset which allows to study whether better sentence representations can be built when having access to the corresponding images, in contrast with having access to the text alone. Similar to STS, annotators were asked to score the similarity between two items, but in this case each item comprises an image and a textual caption. Systems need to predict the human score. Figure FIGREF1 shows an instance in the dataset, with similarity scores in the captions. The example illustrates the need to re-score the similarity values, as the text-only similarity is not applicable to the multimodal version of the dataset: the annotators return a low similarity when using only text, while, when having access to the corresponding image, they return a high similarity. Although a dataset for multimodal inference exists (visual textual entailment BIBREF7) that dataset reused the text-only inference labels. The vSTS dataset aims to become a standard benchmark to test the contribution of visual information when evaluating the similarity of sentences and the quality of multimodal representations, allowing to test the complementarity of visual and textual information for improved language understanding. Although multimodal tasks such as image captioning, visual question answering and visual machine translation already show that the combination of both modalities can be effectively used, those tasks do not separately benchmark the inference capabilities of multimodal visual and textual representations. 
We evaluate a variety of well-known textual, visual and multimodal representations in supervised and unsupervised scenarios, and systematically explore whether visual content is useful for sentence similarity. For text, we studied pre-trained word embeddings such as GloVe BIBREF8, pre-trained language models like GPT-2 and BERT BIBREF9, BIBREF10, sentence representations fine-tuned on an entailment task like USE BIBREF11, and textual representations pre-trained on a multimodal caption retrieval task like VSE++ BIBREF12. For image representation we use a model pre-trained on ImageNet (ResNet BIBREF13). In order to combine visual and textual representations we used concatenation and learned simple projections. Our experiments show that the text-only models are outperformed by their multimodal counterparts when adding visual representations, with up to 24% error reduction. Our contributions are the following: (1) We present a dataset which allows evaluating visual/textual representations on an inference task. The dataset is publicly available under a free license. (2) Our results show, for the first time, that the addition of image representations allows better inference. (3) The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE. (4) The improvement when using image representations is observed both when computing the similarity directly from multimodal representations and when training siamese networks. At the same time, the improvement holds for all textual representations, even those fine-tuned on a similarity task. Related Work The task of Visual Semantic Textual Similarity stems from previous work on textual inference tasks. In textual entailment, given a textual premise and a textual hypothesis, systems need to decide whether the first entails the second, whether they are in contradiction, or neither BIBREF4. Popular datasets include the Stanford Natural Language Inference dataset BIBREF5. As an alternative to entailment, STS datasets comprise pairs of sentences which have been annotated with similarity scores. STS systems are usually evaluated on the STS benchmark dataset BIBREF6. In this paper we present an extension of STS, so we describe the task in more detail in the next section. Textual entailment has recently been extended with visual information. A dataset for visual textual entailment was presented in BIBREF7. Even though the task is different from the text-only counterpart, they reused the text-only inference ground-truth labels without re-annotating them. In fact, they annotated a small sample to show that the labels change. In addition, their dataset tested pairs of text snippets referring to a single image, and it was only useful for testing grounding techniques, but not for measuring the complementarity of visual and textual representations. The reported results did not show that grounding improves results, while our study shows that the inference capabilities of multimodal visual and textual representations improve over text-only representations. In related work, BIBREF14 propose visual entailment, where the premise is an image and the hypothesis is textual. The chosen setting does not allow testing the contribution of multimodal representations with respect to unimodal ones.
The complementarity of visual and text representations for improved language understanding was first shown for word representations, where word embeddings were combined with visual or perceptual input to produce multimodal representations BIBREF15. The task of Visual Semantic Textual Similarity is also related to other multimodal tasks such as Image Captioning BIBREF16, BIBREF17, Text-Image Retrieval BIBREF18, BIBREF19 and Visual Question Answering BIBREF2. Image Captioning is a task that aims to generate a description of a given image. The task is related to ours in that it requires an understanding of the scene depicted in the image, so that the system can generate an accurate description of it. Unlike vSTS, image captioning is a generation task in which evaluation is challenging and unclear, as the defined automatic metrics are somewhat problematic BIBREF20. On the other hand, the Text-Image Retrieval task requires finding similarities and differences between items in the two modalities, so that relevant and irrelevant texts and images can be distinguished with respect to the query. Apart from not checking inference explicitly, the other main difference with regard to vSTS is that, in retrieval, items are ranked from most to least similar, whereas the vSTS task consists of predicting an accurate real-valued similarity score. A comprehensive overview is out of the scope of this paper, and thus we focus on the most related vision and language tasks. We refer the reader to BIBREF21 for a survey on vision and language research. Many of these tasks can be considered as extensions of previously existing NLP tasks. For instance, Image Captioning can be seen as an extension of conditional language modeling BIBREF22 or natural language generation BIBREF23, whereas Visual Question Answering is a natural counterpart of traditional Question Answering in NLP. Regarding multimodal and unimodal representation learning, convolutional neural networks (CNN) have become the standard architecture for generating representations for images BIBREF24. Most of these models learn transferable general image features in tasks such as image classification and detection, semantic segmentation, and action recognition. The most widely used transferable global image representations are learned with deep CNN architectures such as AlexNet BIBREF25, VGG BIBREF26, Inception-v3 BIBREF27, and ResNet BIBREF13 using large datasets such as ImageNet BIBREF1, MSCOCO BIBREF28 and Visual Genome BIBREF29. Recently, Graph Convolution Networks (GCN) have shown to be a promising way to distill multimodal representations from multiple input types BIBREF30. Language representation is mostly done with pretrained word embeddings like GloVe BIBREF8 and sequence learning techniques such as Recurrent Neural Networks (RNN) BIBREF31. Recently, self-attention approaches like Transformers BIBREF32 have provided transferable models (BERT, GPT-2, among others BIBREF9, BIBREF10) that significantly improve the state of the art in many NLP tasks. Alternatively, sentence representations have been fine-tuned on an entailment task BIBREF11. We will present those used in our work in more detail below. The Visual STS Dataset STS assesses the degree to which two sentences are semantically equivalent to each other. The annotators measure the similarity between sentences, with higher scores for more similar sentences. The annotations of similarity were guided by the scale in Table TABREF4, ranging from 0 for no meaning overlap to 5 for meaning equivalence.
Intermediate values reflect interpretable levels of partial overlap in meaning. In this work, we extend the STS task with images, providing visual information that models use, and assess how much visual content can contribute in a language understanding task. The input of the task now consists of two items, each comprising an image and its corresponding caption. In the same way as in STS, systems need to score the similarity of the sentences with the help of the images. Figure FIGREF1 shows an example of an instance in the dataset. In previous work reported in a non-archival workshop paper BIBREF33, we presented a preliminary dataset which used the text-only ground-truth similarity scores. The 819 pairs were extracted from a subset of the STS benchmark, more specifically, the so called STS-images subset, which contains pairs of captions with access to images from PASCAL VOC-2008 BIBREF34 and Flickr-8K BIBREF35. Our manual analysis, including examples like Figure FIGREF1, showed that in many cases the text-only ground truth was not valid, so we decided to re-annotated the dataset but showing the images in addition to the captions (the methodology is identical to the AMT annotation method mentioned below). The correlation of the new annotations with regard to the old ones was high (0.9$\rho $) showing that the change in scores was not drastic, but that annotations did differ. The annotators tended to return higher similarity scores, as the mean similarity score across the dataset increased from 1.7 to 2.1. The inter-tagger correlation was comparable to the text-only task, showing that the new annotation task was well-defined. From another perspective, the fact that we could only extract 819 pairs from existing STS datasets showed the need to sample new pairs from other image-caption datasets. In order to be effective in measuring the quality of multimodal representations, we defined the following desiderata for the new dataset: (1) Following STS datasets, the similarity values need to be balanced, showing a uniform distribution; (2) Paired images have to be different to avoid making the task trivial, as hand analysis of image-caption datasets showed that two captions of the same image tended to be paraphrases of each other; (3) The images should not be present in more than one instance, to avoid biases in the visual side; (4) It has to contain a wide variety of images so we can draw stronger conclusions. The preliminary dataset fulfilled 2 and 3, but the dataset was skewed towards low similarity values and the variety was limited. The Visual STS Dataset ::: Data Collection The data collection of sentence-image pairs comprised several steps, including the selection of pairs to be annotated, the annotation methodology, and a final filtering stage. The Visual STS Dataset ::: Data Collection ::: 1. Sampling data for manual annotation. We make use of two well-known image-caption datasets. On one hand, Flickr30K dataset BIBREF36 that has about 30K images with 5 manually generated captions per image. On the other hand, we use the Microsoft COCO dataset BIBREF28, which contains more than 120K images and 5 captions per image. Using both sources we hope to cover a wide variety of images. In order to select pairs of instances, we did two sampling rounds. The goal of the first run is to gather a large number of varied image pairs with their captions which contain interesting pairs. We started by sampling images. We then combined two ways of sampling pairs of images. 
In the first, we generated pairs by sampling the images randomly. This way, we ensure higher variety of paired scenes, but presumably two captions paired at random will tend to have very low similarity. In the second, we paired images taking into account their visual similarity, ensuring the selection of related scenes with a higher similarity rate. We used the cosine distance of the top-layer of a pretrained ResNet-50 BIBREF13 to compute the similarity of images. We collected an equal number of pairs for the random and visual similarity strategy, gathering, in total, $155,068$ pairs. As each image has 5 captions, we had to select one caption for each image, and we decided to select the two captions with highest word overlap. This way, we get more balanced samples in terms of caption similarity. The initial sampling created thousands of pairs that were skewed towards very low similarity values. Given that manual annotation is a costly process, and with the goal of having a balanced dataset, we used an automatic similarity system to score all the pairs. This text-only similarity system is an ensemble of feature-based machine learning systems that uses a large variety of distance and machine-translation based features. The model was evaluated on a subset of STS benchmark dataset BIBREF6 and compared favorably to other baseline models. As this model is very different from current deep learning techniques, it should not bias the dataset sampling in a way which influences current similarity systems. The automatic scores were used to sample the final set of pairs as follows. We defined five similarity ranges ($(0, 1], \ldots ,(4, 5]$) and randomly selected the same amount of pairs from the initial paired sample. We set a sampling of maximum 3000 instances (i.e 600 instances per range). Given the fact that the high similarity range had less than 600 instances, we collected a total of 2639 potential text-image candidate pairs for manual annotation. Figure FIGREF8 shows the proposed methodology can sample approximately a uniform distribution with the exception of the higher similarity values (left and middle plots). In addition, we show that the lower predicted similarities are mainly coming from random sampling, whereas, as expected, the higher ones come from similar images. The Visual STS Dataset ::: Data Collection ::: 2. Manual annotations. In order to annotate the sample of 2639 pairs, we used Amazon Mechanical Turk (AMT). Crowdworkers followed the same instructions of previous STS annotation campaigns BIBREF6, very similar to those in Table TABREF4. Annotators needed to focus on textual similarity with the aid of aligned images. We got up to 5 scores per item, and we discarded annotators that showed low correlation with the rest of the annotators ($\rho < 0.75$). In total 56 annotators took part. On average each crowdworker annotated 220 pairs, where the amounts ranged from 19 to 940 annotations. Regardless the annotation amounts, most of the annotators showed high correlations with the rest of the participants. We computed the annotation correlation by aggregating the individual Pearson correlation with averaged similarity of the other annotators. The annotation shows high correlation among the crowdworkers ($\rho = 0.89$ $\pm 0.01$) comparable to that of text-only STS datasets. Table TABREF10 shows the average item similarity and item disagreement in the annotation. We defined item disagreement as the standard deviation of the annotated similarity value. 
The low average similarity can be explained by the high number of zero-similarity pairs. Item disagreement is moderately low (about 0.6 points out of 5), which is in accordance with the high correlation between the annotators. The Visual STS Dataset ::: Data Collection ::: 3. Selection of difficult examples. In preliminary experiments, the evaluation of two baseline models, word overlap and the ensemble system mentioned before, showed that the sampling strategy introduced a large number of trivial examples. For example, the word overlap system attained $0.83$ $\rho $. This high correlation could be the result of using word overlap in the first sampling round. In order to create a more challenging dataset on which to measure the effectiveness of multimodal representations, we defined the easiness metric to filter out some of the easy examples from the annotated dataset. We defined easiness as the contribution of an individual example to the overall agreement between word-overlap similarity and the gold standard across the whole dataset. Taking the inner product of the Pearson correlation formula as a basis, we measure the easiness of an annotated example $i$ as follows: $$e_{i} = \left( \frac{o_{i} - \overline{o}}{s_{o}} \right) \left( \frac{gs_{i} - \overline{gs}}{s_{gs}} \right)$$ where $o_{i}$ is the word-overlap similarity of the $i$-th pair, $\overline{o}$ is the mean overlap similarity in the dataset, and $s_{o}$ is the standard deviation. Similarly, variable $gs_{i}$ is the gold-standard value of the $i$-th pair, and $\overline{gs}$ and $s_{gs}$ are the mean and standard deviation of gold values in the dataset, respectively. We removed 30% of the easiest examples and created a more challenging dataset of 1858 pairs, reducing $\rho $ to $0.57$ for the word-overlap model, and to $0.66$ (from $0.85$) for the ML-based approach. The Visual STS Dataset ::: Dataset Description The full dataset comprises both the sample mentioned above and the 819 pairs from our preliminary work, totalling 2677 pairs. Figure FIGREF14 shows the final item similarity distribution. Although the distribution is skewed towards lower similarity values, we consider that all the similarity ranges are sufficiently well covered. The average similarity of the dataset is $1.9$ with a standard deviation of $1.36$ points. The dataset contains 335 zero-valued pairs out of the 2677 instances, which partly explains the lower average similarity. Evaluation of Representation Models The goal of the evaluation is to explore whether representation models that have access to images, instead of text alone, have better inference abilities. We consider the following models. ResNet BIBREF13 is a deep network of 152 layers in which residual representation functions are learned instead of learning the signal representation directly. The model is trained over 1.2 million images of ImageNet, the ILSVRC subset of 1000 image categories. We use the top layer of a pretrained ResNet-152 model to represent the images associated with the text. Each image is represented with a vector of 2048 dimensions. GloVe. The Global Vector model BIBREF8 is a log-linear model trained to encode semantic relationships between words as vector offsets in the learned vector space, combining global matrix factorization and local context window methods. Since GloVe is a word-level vector model, we build sentence representations with the mean of the vectors of the words composing the sentence. The pre-trained GloVe model considered in this paper is the 6B-300d one, with a vocabulary of 400k words and 300-dimensional vectors, trained on a dataset of 6 billion tokens. BERT.
The Bidirectional Encoder Representations from Transformers model BIBREF9 implements a novel methodology based on the so-called masked language model, which randomly masks some of the tokens from the input, and predicts the original vocabulary id of the masked word based only on its context. The BERT model used in our experiments is BERT-Large Uncased (24-layer, 1024-hidden, 16-heads, 340M parameters). In order to obtain the sentence-level representation we extract the token embeddings of the last layer and compute the mean vector, yielding a vector of 1024 dimensions. GPT-2. The Generative Pre-Training-2 model BIBREF10 is a language model based on the transformer architecture, which is trained on the task of predicting the next word given all the previous words occurring in some text. In the same manner as for BERT and GloVe, we extract the token embeddings of the last layer and compute the mean vector to obtain a sentence-level representation of 768 dimensions. The GPT-2 model used in our experiments was trained on a very large corpus of about 40 GB of text data and has 1.5 billion parameters. USE. The Universal Sentence Encoder BIBREF11 is a model for encoding sentences into embedding vectors, specifically designed for transfer learning in NLP. Based on a deep averaging network encoder, the model is trained for varying text lengths, such as sentences, phrases or short paragraphs, and on a variety of semantic tasks including STS. The encoder returns a sentence vector of 512 dimensions. VSE++. The Visual-Semantic Embedding BIBREF12 is a model trained for image-caption retrieval. The model learns a joint space of aligned images and captions. The model is an improvement of the original introduced by BIBREF37, and combines a ResNet-152 over images with a bidirectional Recurrent Neural Network (GRU) over the sentences. Texts and images are projected onto the joint space, obtaining representations of 1024 dimensions both for images and texts. We used the projections of images and texts in our experiments. The VSE++ model used in our experiments was pre-trained on the Microsoft COCO dataset BIBREF28 and the Flickr30K dataset BIBREF36. Table TABREF15 summarizes the sentence and image representations used in the evaluation. Evaluation of Representation Models ::: Experiments ::: Experimental Setting. We split the vSTS dataset into training, validation and test partitions, sampling at random and preserving the overall score distributions. In total, we use 1338 pairs for training, 669 for validation, and the remaining 670 pairs for the final testing. Similar to the STS task, we use the Pearson correlation coefficient ($\rho $) as the evaluation metric of the task. Evaluation of Representation Models ::: Experiments ::: STS models. Our goal is to keep the similarity models as simple as possible in order to directly evaluate the textual and visual representations and to avoid, as much as possible, the influence of additional parameters that come into play when learning a particular task. We defined two scenarios: the supervised and the unsupervised scenario. In the supervised scenario we train a Siamese Regression model in a similar way to that presented in BIBREF38. Given a sentence/image pair, we wish to predict a real-valued similarity in the range $[1,K]$, with $K=5$ in our experiments. We first produce sentence/image representations $h_{L}$ and $h_{R}$ for each sentence in the pair using any of the unimodal models described above, or using multimodal representations as explained below.
Given these representations, we predict the similarity score $o$ using a regression model that takes both the distance and angle between the pair ($h_{L}$, $h_{R}$). Note that the distance and angle concatenation ($[h_{x}, h_{+}]$) yields a $2 * d$-dimensional vector. The resulting vector is used as input for the non-linear hidden layer ($h_{s}$) of the model. Contrary to BIBREF38, we empirically found that the estimation of a continuous value worked better than learning a softmax distribution over $[1,K]$ integer values. The loss function of our model is the Mean Square Error (MSE), which is the most commonly used regression loss function. In the unsupervised scenario similarity is computed as the cosine of the produced $h_{L}$ and $h_{R}$ sentence/image representations. Evaluation of Representation Models ::: Experiments ::: Multimodal representation. We combined textual and image representations in two simple ways. The first method is concatenation of the text and image representation (concat). Before concatenation we applied L2 normalization to each of the modalities. The second method is to learn a common space for the two modalities before concatenation (project). The projection of each modality learns a space of $d$ dimensions, so that $h_{1}, h_{2} \in \mathbb {R}^{d}$. Once the multimodal representation ($h_{m}$) is produced for the left and right items of the pair, the vectors are directly plugged into the regression layers. Projections are learned end-to-end with the regression layers and the MSE as loss function. Evaluation of Representation Models ::: Experiments ::: Hyperparameters and training details. We use the validation set to learn the parameters of the supervised models, and to carry out an exploration of the hyperparameters. We train each model for a maximum of 300 epochs and apply an early-stopping strategy with a patience of 25 epochs. For early stopping we monitor the MSE loss value on validation. For the remaining hyperparameters, we run a grid search to select their values. We explore learning rate values (0.0001, 0.001, 0.01, 0.05), L2 regularization weights (0.0, 0.0001, 0.001, 0.01), and different hidden layer ($h_{s}$) dimensions (50, 100, 200, 300). In addition, we activate and deactivate batch normalization in each layer for each hyperparameter selection. Evaluation of Representation Models ::: Results ::: The unsupervised scenario. Table TABREF26 reports the results using the item representations directly. We report results over train and dev partitions for completeness, but note that none of them was used to tune the models. As can be seen, multimodal representations consistently outperform their text-only counterparts. This confirms that, overall, visual information is helpful in the semantic textual similarity task and that image and sentence representations are complementary. For example, the bert model improves by more than 13 points when the visual information provided by the resnet is concatenated. glove shows a similar or even larger improvement, with similar trends for use and vse++(text). Although vse++(img) shows better performance than resnet when applying them alone, further experimentation showed lower complementarity when combining with textual representations (e.g. $0.807\rho $ in test combining the textual and visual modalities of vse++). This is expected, as vse++(img) is pre-trained along with the textual part of the vse++ model on the same task. We do not show the combinations with vse++(img) due to the lack of space.
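To make the concatenation-based unsupervised similarity described above concrete, the following is a minimal sketch (not the authors' code) of how the multimodal cosine similarity could be computed from precomputed sentence and image vectors and evaluated with the Pearson correlation; the dimensions and the random arrays are placeholders for illustration only.

```python
import numpy as np
from scipy.stats import pearsonr

def l2_normalize(x, eps=1e-8):
    # Scale each vector to unit L2 norm, as done before concatenation.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def multimodal_cosine(text_l, img_l, text_r, img_r):
    """Unsupervised similarity: L2-normalize each modality, concatenate,
    and take the cosine of the left and right multimodal vectors."""
    h_l = np.concatenate([l2_normalize(text_l), l2_normalize(img_l)], axis=-1)
    h_r = np.concatenate([l2_normalize(text_r), l2_normalize(img_r)], axis=-1)
    num = (h_l * h_r).sum(axis=-1)
    den = np.linalg.norm(h_l, axis=-1) * np.linalg.norm(h_r, axis=-1)
    return num / den

# Hypothetical precomputed features: 670 test pairs, text (1024-d) and image (2048-d).
n = 670
text_l, text_r = np.random.randn(n, 1024), np.random.randn(n, 1024)
img_l, img_r = np.random.randn(n, 2048), np.random.randn(n, 2048)
gold = np.random.uniform(0, 5, n)

pred = multimodal_cosine(text_l, img_l, text_r, img_r)
rho, _ = pearsonr(pred, gold)  # evaluation metric used in the paper
print(f"Pearson rho: {rho:.3f}")
```

In practice the vectors would come from the pretrained encoders listed above (e.g. BERT for sentences and ResNet-152 for images) rather than random placeholders.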
Interestingly, the results show that images alone are valid for predicting caption similarity ($0.627 \rho $ in test). Actually, in this experimental setting resnet is on par with bert, which is the best purely unsupervised text-only model. Surprisingly, gpt-2 representations are not useful for text similarity tasks. This might be because language models tend to forget past context as they focus on predicting the next token BIBREF39. Due to the low results of gpt-2 we decided not to combine it with resnet. Evaluation of Representation Models ::: Results ::: The supervised scenario. Table TABREF29 shows a similar pattern to that in the unsupervised setting. Overall, models that use a conjunction of multimodal features significantly outperform unimodal models, and this confirms, in a more competitive scenario, that adding visual information helps learn the STS task more easily. The gain of multimodal models is considerable compared to the text-only models. The most significant gain is obtained when glove features are combined with resnet. The model improves by more than $15.0$ points. In this case, the improvement over bert is lower, but still considerable with more than $4.0$ points. In the same vein as in the unsupervised scenario, features obtained with a resnet can be as competitive as some text based models (e.g. BERT). gpt-2, as in the unsupervised scenario, does not produce useful representations for semantic similarity tasks. Surprisingly, the regression model with gpt-2 features is not able to learn anything in the training set. As we did in the previous scenario, we do not keep combining gpt-2 with visual features. The multimodal versions of vse++ and use are the best models among the supervised approaches. The textual versions of use and vse++ alone obtain very competitive results and outperform some of the multimodal models (the concatenated versions of glove and bert with resnet). The results might indicate that text-only models with sufficient training data can be on par with multimodal models but, still, when there is data scarcity, multimodal models can perform better as they have more information over the same data point. The comparison between projected and concatenated models shows that projected models attain slightly better results in two cases, but the best overall results are obtained when concatenating vse++(text) with resnet. Although concatenation proves to be a hard baseline, we expect that more sophisticated combination methods like grounding BIBREF40 will obtain larger gains in the future. Discussion ::: Contribution of the Visual Content Table TABREF31 summarizes the contribution of the images on text representations in the test partition. The contribution is consistent through all text-based representations. We measure the absolute difference (Diff) and the error reduction (E.R) of each textual representation with the multimodal counterpart. For the comparison we chose the best text model for each representation. As expected, we obtain the largest improvement ($22-26\%$ E.R) when text-based unsupervised models are combined with image representations. Note that unsupervised models are not learning anything about the specific task, so the more information in the representation, the better. In the case of use and vse++ the improvement is significant but not as large as for the purely unsupervised models. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE.
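As a complement to the Diff and E.R figures discussed above, here is a small sketch of how both quantities could be derived from a pair of Pearson correlations, under the assumption that error is taken as $1-\rho $; the input values below are illustrative and are not taken from the tables.

```python
def absolute_diff(rho_text: float, rho_multi: float) -> float:
    """Absolute improvement (Diff) of the multimodal model over the text-only one."""
    return rho_multi - rho_text

def error_reduction(rho_text: float, rho_multi: float) -> float:
    """Relative error reduction in percent, assuming error = 1 - Pearson correlation."""
    err_text = 1.0 - rho_text
    err_multi = 1.0 - rho_multi
    return 100.0 * (err_text - err_multi) / err_text

# Illustrative values only (not actual table entries).
rho_text, rho_multi = 0.70, 0.78
print(f"Diff: {absolute_diff(rho_text, rho_multi):.2f}")
print(f"E.R.: {error_reduction(rho_text, rho_multi):.1f}%")
```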
The improvement is consistent for the supervised models. Contrary to the unsupervised setting, these models are designed to learn about the task, so there is usually less room for improvement. Still, glove+resnet shows an error reduction of $12.9$ in the test set. Finally, use and vse++ show smaller improvements when we add visual information into the model. Figure FIGREF32 displays some examples where visual information positively contributes to predicting similarity values accurately. The examples show the case where related descriptions are lexicalized in a different way, so a text-only model (glove) predicts low similarity between captions (top two examples). Instead, the multimodal representation glove+resnet does have access to the image and can predict the similarity value of the two captions more accurately. The examples in the bottom show the opposite case, where similar sets of words are used to describe very different situations. The text-based model overestimates the similarity of the captions, while the multimodal model corrects the output by looking at the differences of the images. On the contrary, Figure FIGREF33 shows that images can also be misleading, and that the task is not as trivial as combining global representations of the image. In this case, related but different captions are supported by very similar images, and as a consequence, the multimodal model overestimates their similarity, while the text-only model focuses on the most discriminating piece of information in the text. Discussion ::: The effect of hyperparameters Neural models are sensitive to hyperparameters, and one might think that the results in the supervised scenario are due to hyperparameter optimization. Figure FIGREF35 displays the variability of $\rho $ in development across all hyperparameters. Due to space constraints we show text-only and multimodal concatenated models. Models are ordered by mean performance. As we can see, combined models show better mean performance, and all models except Glove exhibit tight variability. Conclusions and Future Work The long term goal of our research is to devise multimodal representation techniques that improve current inference capabilities. We have presented a novel task, Visual Semantic Textual Similarity (vSTS), where the inference capabilities of visual, textual, and multimodal representations can be tested directly. The dataset has been manually annotated by crowdworkers with high inter-annotator correlation ($\rho = 0.89$). We tested several well-known textual and visual representations, which we combined using concatenation and projection. Our results show, for the first time, that the addition of image representations allows better inference. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE. The improvement when using image representations is observed both when computing the similarity directly from multimodal representations, and also when training siamese networks. In the future, we would like to ground the text representations to image regions BIBREF40, which could avoid misleading predictions due to the global representation of the image. Finally, we would like to extend the dataset with more examples, as we acknowledge that the training set is too limited to train larger models.
This research was partially funded by the Basque Government excellence research group (IT1343-19), the NVIDIA GPU grant program, the Spanish MINECO (DeepReading RTI2018-096846-B-C21 (MCIU/AEI/FEDER, UE)) and project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018). Ander enjoys a PhD grant from the Basque Government.
The second method is to learn a common space for the two modalities before concatenation (project), The first method is concatenation of the text and image representation (concat)
4d8b3928f89d73895a7655850a227fbac08cdae9
4d8b3928f89d73895a7655850a227fbac08cdae9_0
Q: How much better is inference that has addition of image representation compared to text-only representations? Text: Introduction Language understanding is a task proving difficult to automatize, because, among other factors, much of the information that is needed for the correct interpretation of an utterance is not explicit in text BIBREF0. This contrasts with how natural is language understanding for humans, who can cope easily with information absent in text, using common sense and background knowledge like, for instance, typical spatial relations between objects. From another perspective, it is well-known that the visual modality provides complementary information to that in the text. In fact, recent advances in deep learning research have led the field of computer vision and natural language processing to significant progress in tasks that involve visual and textual understanding. Tasks that include visual and textual content include Image Captioning BIBREF1, Visual Question Answering BIBREF2, and Visual Machine Translation BIBREF3, among others. On the other hand, progress in language understanding has been driven by datasets which measure the quality of sentence representations, specially those where inference tasks are performed on top of sentence representations, including textual entailment BIBREF4, BIBREF5 and semantic textual similarity (STS). In STS BIBREF6, for instance, pairs of sentences have been annotated with similarity scores, with top scores for semantically equivalent sentences and bottom scores for completely unrelated sentences. STS provides a unified framework for extrinsic evaluation of multiple semantic aspects such as compositionality and phrase similarity. Contrary to related tasks, such as textual entailment and paraphrase detection, STS incorporates the notion of graded semantic similarity between the pair of textual sentences and is symmetric. In this paper we extend STS to the visual modality, and present Visual Semantic Textual Similarity (vSTS), a task and dataset which allows to study whether better sentence representations can be built when having access to the corresponding images, in contrast with having access to the text alone. Similar to STS, annotators were asked to score the similarity between two items, but in this case each item comprises an image and a textual caption. Systems need to predict the human score. Figure FIGREF1 shows an instance in the dataset, with similarity scores in the captions. The example illustrates the need to re-score the similarity values, as the text-only similarity is not applicable to the multimodal version of the dataset: the annotators return a low similarity when using only text, while, when having access to the corresponding image, they return a high similarity. Although a dataset for multimodal inference exists (visual textual entailment BIBREF7) that dataset reused the text-only inference labels. The vSTS dataset aims to become a standard benchmark to test the contribution of visual information when evaluating the similarity of sentences and the quality of multimodal representations, allowing to test the complementarity of visual and textual information for improved language understanding. Although multimodal tasks such as image captioning, visual question answering and visual machine translation already show that the combination of both modalities can be effectively used, those tasks do not separately benchmark the inference capabilities of multimodal visual and textual representations. 
We evaluate a variety of well-known textual, visual and multimodal representations in supervised and unsupervised scenarios, and systematically explore if visual content is useful for sentence similarity. For text, we studied pre-trained word embeddings such as GloVe BIBREF8, pre-trained language models like GPT-2 and BERT BIBREF9, BIBREF10, sentence representations fine-tuned on an entailment task like USE BIBREF11, and textual representations pre-trained on a multimodal caption retrieval task like VSE++ BIBREF12. For image representation we use a model pre-trained on Imagenet (ResNet BIBREF13). In order to combine visual and textual representations we used concatenation and learn simple projections. Our experiments show that the text-only models are outperformed by their multimodal counterparts when adding visual representations, with up to 24% error reduction. Our contributions are the following: (1) We present a dataset which allows to evaluate visual/textual representations on an inference task. The dataset is publicly available under a free license. (2) Our results show, for the first time, that the addition of image representations allows better inference. (3) The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned in a text-only inference task like USE. (4) The improvement when using image representations is observed both when computing the similarity directly from multimodal representations, and also when training siamese networks. At the same time the improvement holds for all textual representations, even those fine-tuned on a similarity task. Related Work The task of Visual Semantic Textual Similarity stems from previous work on textual inference tasks. In textual entailment, given a textual premise and a textual hypothesis, systems need to decide whether the first entails the second, they are in contradiction, or none of the previous BIBREF4. Popular datasets include the Stanford Natural Language Inference dataset BIBREF5. As an alternative to entailment, STS datasets comprise pairs of sentences which have been annotated with similarity scores. STS systems are usually evaluated on the STS benchmark dataset BIBREF6. In this paper we present an extension of STS, so we present the task in more detail in the next section. Textual entailment has been recently extended with visual information. A dataset for visual textual entailment was presented in BIBREF7. Even if the task is different from the text-only counterpart, they reused the text-only inference ground-truth labels without re-annotating them. In fact, they annotate a small sample to show that the labels change. In addition, their dataset tested pairs of text snippets referring to a single image, and it was only useful for testing grounding techniques, but not to measure the complementarity of visual and textual representations. The reported results did not show that grounding improves results, while our study shows that the inference capabilities of multimodal visual and textual representations improve over text-only representations. In related work, BIBREF14 propose visual entailment, where the premise is an image and the hypothesis is textual. The chosen setting does not allow to test the contribution of multimodal representationn with respect to unimodal ones. 
The complementarity of visual and text representations for improved language understanding was first proven on word representations, where word embeddings were combined with visual or perceptual input to produce multimodal representations BIBREF15. The task of Visual Semantic Textual Similarity is also related to other multimodal tasks such as Image Captioning BIBREF16, BIBREF17, Text-Image Retrieval BIBREF18, BIBREF19 and Visual Question Answering BIBREF2. Image Captioning is a task that aims to generate a description of a given image. The task is related to ours in that it is required an understanding of the scene depicted in the image, so the system can generate an accurate description of it. Unlike vSTS, image captioning is a generation task in which evaluation is challenging and unclear, as the defined automatic metrics are somewhat problematic BIBREF20. On the other hand, Text-Image Retrieval task requires to find similarities and differences of the items in two modalities, so we can distinguish relevant and irrelevant texts and images regarding the query. Apart from not checking inference explicitly, the other main difference with regards to vSTS is that, in retrieval, items are ranked from most to least similar, whereas the vSTS task consists on scoring an accurate real valued similarity. A comprehensive overview is out of the scope, and thus we focus on the most related vision and language tasks. We refer the reader to BIBREF21 for a survey on vision and language research. Many of these tasks can be considered as extensions of previously existing NLP taks. For instance, Image Captioning can be seen as an extension of conditional language modeling BIBREF22 or natural language generation BIBREF23, whereas Visual Question Answering is a natural counterpart of the traditional Question Answering in NLP. Regarding multimodal and unimodal representation learning, convolutional neural networks (CNN) have become the standard architecture for generating representations for images BIBREF24. Most of these models learn transferable general image features in tasks such as image classification, and detection, semantic segmentation, and action recognition. Most used transferable global image representations are learned with deep CNN architectures such as AlexNet BIBREF25, VGG BIBREF26, Inception-v3 BIBREF27, and ResNet BIBREF13 using large datasets such as ImageNet BIBREF1, MSCOCO BIBREF28 and Visual Genome BIBREF29. Recently, Graph Convolution Networks (GCN) showed to be promising way to distill multiple input types multimodal representations BIBREF30. Language representation is mostly done with pretrained word embeddings like Glove BIBREF8 and sequence learning techniques such as Recurrent Neural Networks (RNN) BIBREF31. Recently, self-attention approaches like Transformers BIBREF32 provided transferable models (BERT, GPT-2, among others BIBREF9, BIBREF10) that significantly improve many state-of-the-art tasks in NLP. Alternatively, sentence representations have been fine-tuned on an entailment task BIBREF11. We will present those used in our work in more detail below. The Visual STS Dataset STS assesses the degree to which two sentences are semantically equivalent to each other. The annotators measure the similarity among sentences, with higher scores for more similar sentences. The annotations of similarity were guided by the scale in Table TABREF4, ranging from 0 for no meaning overlap to 5 for meaning equivalence. 
Intermediate values reflect interpretable levels of partial overlap in meaning. In this work, we extend the STS task with images, providing visual information that models use, and assess how much visual content can contribute in a language understanding task. The input of the task now consists of two items, each comprising an image and its corresponding caption. In the same way as in STS, systems need to score the similarity of the sentences with the help of the images. Figure FIGREF1 shows an example of an instance in the dataset. In previous work reported in a non-archival workshop paper BIBREF33, we presented a preliminary dataset which used the text-only ground-truth similarity scores. The 819 pairs were extracted from a subset of the STS benchmark, more specifically, the so called STS-images subset, which contains pairs of captions with access to images from PASCAL VOC-2008 BIBREF34 and Flickr-8K BIBREF35. Our manual analysis, including examples like Figure FIGREF1, showed that in many cases the text-only ground truth was not valid, so we decided to re-annotated the dataset but showing the images in addition to the captions (the methodology is identical to the AMT annotation method mentioned below). The correlation of the new annotations with regard to the old ones was high (0.9$\rho $) showing that the change in scores was not drastic, but that annotations did differ. The annotators tended to return higher similarity scores, as the mean similarity score across the dataset increased from 1.7 to 2.1. The inter-tagger correlation was comparable to the text-only task, showing that the new annotation task was well-defined. From another perspective, the fact that we could only extract 819 pairs from existing STS datasets showed the need to sample new pairs from other image-caption datasets. In order to be effective in measuring the quality of multimodal representations, we defined the following desiderata for the new dataset: (1) Following STS datasets, the similarity values need to be balanced, showing a uniform distribution; (2) Paired images have to be different to avoid making the task trivial, as hand analysis of image-caption datasets showed that two captions of the same image tended to be paraphrases of each other; (3) The images should not be present in more than one instance, to avoid biases in the visual side; (4) It has to contain a wide variety of images so we can draw stronger conclusions. The preliminary dataset fulfilled 2 and 3, but the dataset was skewed towards low similarity values and the variety was limited. The Visual STS Dataset ::: Data Collection The data collection of sentence-image pairs comprised several steps, including the selection of pairs to be annotated, the annotation methodology, and a final filtering stage. The Visual STS Dataset ::: Data Collection ::: 1. Sampling data for manual annotation. We make use of two well-known image-caption datasets. On one hand, Flickr30K dataset BIBREF36 that has about 30K images with 5 manually generated captions per image. On the other hand, we use the Microsoft COCO dataset BIBREF28, which contains more than 120K images and 5 captions per image. Using both sources we hope to cover a wide variety of images. In order to select pairs of instances, we did two sampling rounds. The goal of the first run is to gather a large number of varied image pairs with their captions which contain interesting pairs. We started by sampling images. We then combined two ways of sampling pairs of images. 
In the first, we generated pairs by sampling the images randomly. This way, we ensure higher variety of paired scenes, but presumably two captions paired at random will tend to have very low similarity. In the second, we paired images taking into account their visual similarity, ensuring the selection of related scenes with a higher similarity rate. We used the cosine distance of the top-layer of a pretrained ResNet-50 BIBREF13 to compute the similarity of images. We collected an equal number of pairs for the random and visual similarity strategy, gathering, in total, $155,068$ pairs. As each image has 5 captions, we had to select one caption for each image, and we decided to select the two captions with highest word overlap. This way, we get more balanced samples in terms of caption similarity. The initial sampling created thousands of pairs that were skewed towards very low similarity values. Given that manual annotation is a costly process, and with the goal of having a balanced dataset, we used an automatic similarity system to score all the pairs. This text-only similarity system is an ensemble of feature-based machine learning systems that uses a large variety of distance and machine-translation based features. The model was evaluated on a subset of STS benchmark dataset BIBREF6 and compared favorably to other baseline models. As this model is very different from current deep learning techniques, it should not bias the dataset sampling in a way which influences current similarity systems. The automatic scores were used to sample the final set of pairs as follows. We defined five similarity ranges ($(0, 1], \ldots ,(4, 5]$) and randomly selected the same amount of pairs from the initial paired sample. We set a sampling of maximum 3000 instances (i.e 600 instances per range). Given the fact that the high similarity range had less than 600 instances, we collected a total of 2639 potential text-image candidate pairs for manual annotation. Figure FIGREF8 shows the proposed methodology can sample approximately a uniform distribution with the exception of the higher similarity values (left and middle plots). In addition, we show that the lower predicted similarities are mainly coming from random sampling, whereas, as expected, the higher ones come from similar images. The Visual STS Dataset ::: Data Collection ::: 2. Manual annotations. In order to annotate the sample of 2639 pairs, we used Amazon Mechanical Turk (AMT). Crowdworkers followed the same instructions of previous STS annotation campaigns BIBREF6, very similar to those in Table TABREF4. Annotators needed to focus on textual similarity with the aid of aligned images. We got up to 5 scores per item, and we discarded annotators that showed low correlation with the rest of the annotators ($\rho < 0.75$). In total 56 annotators took part. On average each crowdworker annotated 220 pairs, where the amounts ranged from 19 to 940 annotations. Regardless the annotation amounts, most of the annotators showed high correlations with the rest of the participants. We computed the annotation correlation by aggregating the individual Pearson correlation with averaged similarity of the other annotators. The annotation shows high correlation among the crowdworkers ($\rho = 0.89$ $\pm 0.01$) comparable to that of text-only STS datasets. Table TABREF10 shows the average item similarity and item disagreement in the annotation. We defined item disagreement as the standard deviation of the annotated similarity value. 
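The annotator filtering and disagreement computation described above can be sketched as follows. This is an illustrative reconstruction which assumes a dense matrix of scores, whereas the actual AMT data is sparse (each crowdworker only rated a subset of the pairs), so a real implementation would restrict each correlation to the items an annotator actually scored.

```python
import numpy as np
from scipy.stats import pearsonr

def annotator_agreement(scores, min_rho=0.75):
    """scores: (n_annotators, n_items) matrix of similarity ratings.
    Correlate each annotator with the mean of the remaining annotators,
    and keep only those at or above the min_rho threshold."""
    rhos = []
    for a in range(scores.shape[0]):
        others = np.delete(scores, a, axis=0).mean(axis=0)
        rho, _ = pearsonr(scores[a], others)
        rhos.append(rho)
    keep = [a for a, r in enumerate(rhos) if r >= min_rho]
    return np.array(rhos), keep

def item_disagreement(scores):
    """Per-item disagreement: standard deviation of the annotated scores."""
    return scores.std(axis=0)

# Toy example: 5 annotators, 100 items, scores in [0, 5].
scores = np.random.uniform(0, 5, size=(5, 100))
rhos, keep = annotator_agreement(scores)
print(rhos, item_disagreement(scores).mean())
```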
The low average similarity can be explained by the high number of zero-similarity pairs. Item disagreement is moderately low (about 0.6 points out of 5), which is in accordance with the high correlation between the annotators. The Visual STS Dataset ::: Data Collection ::: 3. Selection of difficult examples. In preliminary experiments, the evaluation of two baseline models, word overlap and the ensemble system mentioned before, showed that the sampling strategy introduced a large number of trivial examples. For example, the word overlap system attained $0.83$ $\rho $. This high correlation could be the result of using word-overlap in the first sampling round. In order to create a more challenging dataset in which to measure the effectiveness of multimodal representations, we defined the easiness metric to filter out some of the easy examples from the annotated dataset. We defined easiness as the amount of discrepancy provided by an example with regard to the whole dataset. Taking the inner product of the Pearson correlation formula as basis, we measure the easiness of an annotated example $i$ as follows: $e_{i} = \left(\frac{o_{i}-\overline{o}}{s_{o}}\right) \left(\frac{gs_{i}-\overline{gs}}{s_{gs}}\right)$, where $o_{i}$ is the word-overlap similarity of the $i$-th pair, $\overline{o}$ is the mean overlap similarity in the dataset, and $s_{o}$ is the standard deviation. Similarly, variable $gs_{i}$ is the gold-standard value of the $i$-th pair, and $\overline{gs}$ and $s_{gs}$ are the mean and standard deviation of gold values in the dataset, respectively. We removed 30% of the easiest examples and created a more challenging dataset of 1858 pairs, reducing $\rho $ to $0.57$ for the word-overlap model, and to $0.66$ $\rho $ (from $0.85$) for the ML based approach. The Visual STS Dataset ::: Dataset Description The full dataset comprises both the sample mentioned above and the 819 pairs from our preliminary work, totalling 2677 pairs. Figure FIGREF14 shows the final item similarity distribution. Although the distribution is skewed towards lower similarity values, we consider that all the similarity ranges are sufficiently well covered. The average similarity of the dataset is $1.9$ with a standard deviation of $1.36$ points. The dataset contains 335 zero-valued pairs out of the 2677 instances, which somehow explains the lower average similarity. Evaluation of Representation Models The goal of the evaluation is to explore whether representation models that have access to images, instead of text alone, have better inference abilities. We consider the following models. ResNet BIBREF13 is a deep network of 152 layers in which the residual representation functions are learned instead of learning the signal representation directly. The model is trained over 1.2 million images of ImageNet, the ILSVRC subset of 1000 image categories. We use the top layer of a pretrained ResNet-152 model to represent the images associated with the text. Each image is represented with a vector of 2048 dimensions. GloVe. The Global Vector model BIBREF8 is a log-linear model trained to encode semantic relationships between words as vector offsets in the learned vector space, combining global matrix factorization and local context window methods. Since GloVe is a word-level vector model, we build sentence representations with the mean of the vectors of the words composing the sentence. The pre-trained model from GloVe considered in this paper is the 6B-300d, with a vocabulary of 400k words, 300-dimension vectors and trained on a dataset of 6 billion tokens. BERT.
The Bidirectional Encoder Representations from Transformers BIBREF9 implements a novel methodology based on the so-called masked language model, which randomly masks some of the tokens from the input, and predicts the original vocabulary id of the masked word based only on its context. The BERT model used in our experiments is the BERT-Large Uncased (24-layer, 1024-hidden, 16-heads, 340M parameters). In order to obtain the sentence-level representation, we extract the token embeddings of the last layer and compute the mean vector, yielding a vector of 1024 dimensions. GPT-2. The Generative Pre-Training-2 model BIBREF10 is a language model based on the transformer architecture, which is trained on the task of predicting the next word, given all the previous words occurring in some text. In the same manner as BERT and GloVe, we extract the token embeddings of the last layer and compute the mean vector to obtain the sentence-level representation of 768 dimensions. The GPT-2 model used in our experiments was trained on a very large corpus of about 40 GB of text data, with 1.5 billion parameters. USE. The Universal Sentence Encoder BIBREF11 is a model for encoding sentences into embedding vectors, specifically designed for transfer learning in NLP. Based on a deep averaging network encoder, the model is trained for varying text lengths, such as sentences, phrases or short texts, and in a variety of semantic tasks including STS. The encoder returns the vector of the sentence with 512 dimensions. VSE++. The Visual-Semantic Embedding BIBREF12 is a model trained for image-caption retrieval. The model learns a joint space of aligned images and captions. The model is an improvement of the original introduced by BIBREF37, and combines a ResNet-152 over images with a bidirectional Recurrent Neural Network (GRU) over the sentences. Texts and images are projected onto the joint space, obtaining representations of 1024 dimensions both for images and texts. We used projections of images and texts in our experiments. The VSE++ model used in our experiments was pre-trained on the Microsoft COCO dataset BIBREF28 and the Flickr30K dataset BIBREF36. Table TABREF15 summarizes the sentence and image representations used in the evaluation. Evaluation of Representation Models ::: Experiments ::: Experimental Setting. We split the vSTS dataset into training, validation and test partitions, sampling at random and preserving the overall score distributions. In total, we use 1338 pairs for training, 669 for validation, and the remaining 670 pairs for the final testing. Similar to the STS task, we use the Pearson correlation coefficient ($\rho $) as the evaluation metric of the task. Evaluation of Representation Models ::: Experiments ::: STS models. Our goal is to keep similarity models as simple as possible in order to directly evaluate textual and visual representations and to avoid, as much as possible, the influence of the parameters that intertwine when learning a particular task. We defined two scenarios: the supervised and the unsupervised scenarios. In the supervised scenario we train a Siamese Regression model in a similar way as presented in BIBREF38. Given a sentence/image pair, we wish to predict a real-valued similarity in some range $[1,K]$, with $K=5$ in our experiments. We first produce sentence/image representations $h_{L}$ and $h_{R}$ for each sentence in the pair using any of the unimodal models described above, or using multimodal representations as explained below.
Given these representations, we predict the similarity score $o$ using a regression model that takes both the distance and angle between the pair ($h_{L}$, $h_{R}$). Note that the distance and angle concatenation ($[h_{x}, h_{+}]$) yields a $2 * d$-dimensional vector. The resulting vector is used as input for the non-linear hidden layer ($h_{s}$) of the model. Contrary to BIBREF38, we empirically found that the estimation of a continuous value worked better than learning a softmax distribution over $[1,K]$ integer values. The loss function of our model is the Mean Square Error (MSE), which is the most commonly used regression loss function. In the unsupervised scenario similarity is computed as the cosine of the produced $h_{L}$ and $h_{R}$ sentence/image representations. Evaluation of Representation Models ::: Experiments ::: Multimodal representation. We combined textual and image representations in two simple ways. The first method is concatenation of the text and image representation (concat). Before concatenation we applied L2 normalization to each of the modalities. The second method is to learn a common space for the two modalities before concatenation (project). The projection of each modality learns a space of $d$ dimensions, so that $h_{1}, h_{2} \in \mathbb {R}^{d}$. Once the multimodal representation ($h_{m}$) is produced for the left and right items of the pair, the vectors are directly plugged into the regression layers. Projections are learned end-to-end with the regression layers and the MSE as loss function. Evaluation of Representation Models ::: Experiments ::: Hyperparameters and training details. We use the validation set to learn the parameters of the supervised models, and to carry out an exploration of the hyperparameters. We train each model for a maximum of 300 epochs and apply an early-stopping strategy with a patience of 25 epochs. For early stopping we monitor the MSE loss value on validation. For the remaining hyperparameters, we run a grid search to select their values. We explore learning rate values (0.0001, 0.001, 0.01, 0.05), L2 regularization weights (0.0, 0.0001, 0.001, 0.01), and different hidden layer ($h_{s}$) dimensions (50, 100, 200, 300). In addition, we activate and deactivate batch normalization in each layer for each hyperparameter selection. Evaluation of Representation Models ::: Results ::: The unsupervised scenario. Table TABREF26 reports the results using the item representations directly. We report results over train and dev partitions for completeness, but note that none of them was used to tune the models. As can be seen, multimodal representations consistently outperform their text-only counterparts. This confirms that, overall, visual information is helpful in the semantic textual similarity task and that image and sentence representations are complementary. For example, the bert model improves by more than 13 points when the visual information provided by the resnet is concatenated. glove shows a similar or even larger improvement, with similar trends for use and vse++(text). Although vse++(img) shows better performance than resnet when applying them alone, further experimentation showed lower complementarity when combining with textual representations (e.g. $0.807\rho $ in test combining the textual and visual modalities of vse++). This is expected, as vse++(img) is pre-trained along with the textual part of the vse++ model on the same task. We do not show the combinations with vse++(img) due to the lack of space.
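The following is a rough PyTorch sketch of the kind of Siamese regression head described earlier in this section; the exact feature construction (here the element-wise product and absolute difference of $h_{L}$ and $h_{R}$), the hidden size and the activation are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SiameseRegression(nn.Module):
    """Predicts a real-valued similarity from a pair of representations.
    Assumed features: element-wise product (angle) and absolute
    difference (distance) of the two vectors, concatenated (2*d dims)."""
    def __init__(self, dim: int, hidden: int = 200):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU())
        self.out = nn.Linear(hidden, 1)

    def forward(self, h_l: torch.Tensor, h_r: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([h_l * h_r, torch.abs(h_l - h_r)], dim=-1)
        return self.out(self.hidden(feats)).squeeze(-1)

# Toy training step with the MSE loss used in the paper.
model = SiameseRegression(dim=1024)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
h_l, h_r = torch.randn(32, 1024), torch.randn(32, 1024)
gold = torch.rand(32) * 5  # similarity scores in [0, 5]
optim.zero_grad()
loss = nn.functional.mse_loss(model(h_l, h_r), gold)
loss.backward()
optim.step()
print(float(loss))
```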
Interestingly, the results show that images alone are valid for predicting caption similarity ($0.627 \rho $ in test). Actually, in this experimental setting resnet is on par with bert, which is the best purely unsupervised text-only model. Surprisingly, gpt-2 representations are not useful for text similarity tasks. This might be because language models tend to forget past context as they focus on predicting the next token BIBREF39. Due to the low results of gpt-2 we decided not to combine it with resnet. Evaluation of Representation Models ::: Results ::: The supervised scenario. Table TABREF29 shows a similar pattern to that in the unsupervised setting. Overall, models that use a conjunction of multimodal features significantly outperform unimodal models, and this confirms, in a more competitive scenario, that adding visual information helps learn the STS task more easily. The gain of multimodal models is considerable compared to the text-only models. The most significant gain is obtained when glove features are combined with resnet. The model improves by more than $15.0$ points. In this case, the improvement over bert is lower, but still considerable with more than $4.0$ points. In the same vein as in the unsupervised scenario, features obtained with a resnet can be as competitive as some text based models (e.g. BERT). gpt-2, as in the unsupervised scenario, does not produce useful representations for semantic similarity tasks. Surprisingly, the regression model with gpt-2 features is not able to learn anything in the training set. As we did in the previous scenario, we do not keep combining gpt-2 with visual features. The multimodal versions of vse++ and use are the best models among the supervised approaches. The textual versions of use and vse++ alone obtain very competitive results and outperform some of the multimodal models (the concatenated versions of glove and bert with resnet). The results might indicate that text-only models with sufficient training data can be on par with multimodal models but, still, when there is data scarcity, multimodal models can perform better as they have more information over the same data point. The comparison between projected and concatenated models shows that projected models attain slightly better results in two cases, but the best overall results are obtained when concatenating vse++(text) with resnet. Although concatenation proves to be a hard baseline, we expect that more sophisticated combination methods like grounding BIBREF40 will obtain larger gains in the future. Discussion ::: Contribution of the Visual Content Table TABREF31 summarizes the contribution of the images on text representations in the test partition. The contribution is consistent through all text-based representations. We measure the absolute difference (Diff) and the error reduction (E.R) of each textual representation with the multimodal counterpart. For the comparison we chose the best text model for each representation. As expected, we obtain the largest improvement ($22-26\%$ E.R) when text-based unsupervised models are combined with image representations. Note that unsupervised models are not learning anything about the specific task, so the more information in the representation, the better. In the case of use and vse++ the improvement is significant but not as large as for the purely unsupervised models. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE.
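Regarding the projected (project) combination compared above, a minimal sketch of one way to learn a common space per modality before concatenation is shown below; the use of linear layers with a ReLU and the chosen dimensions are assumptions, and in the paper the projections are trained end-to-end together with the regression layers.

```python
import torch
import torch.nn as nn

class ProjectedMultimodal(nn.Module):
    """Projects text and image features to a shared d-dimensional space
    and concatenates them into a single multimodal representation."""
    def __init__(self, text_dim: int, img_dim: int, d: int = 300):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d)
        self.img_proj = nn.Linear(img_dim, d)

    def forward(self, text: torch.Tensor, img: torch.Tensor) -> torch.Tensor:
        h_text = torch.relu(self.text_proj(text))
        h_img = torch.relu(self.img_proj(img))
        return torch.cat([h_text, h_img], dim=-1)  # 2*d multimodal vector

# Example: combine 1024-d text features and 2048-d image features for a batch of 8 items.
combiner = ProjectedMultimodal(text_dim=1024, img_dim=2048, d=300)
h_m = combiner(torch.randn(8, 1024), torch.randn(8, 2048))
print(h_m.shape)  # torch.Size([8, 600])
```

The resulting multimodal vector would then be fed to the same regression head as in the concatenation setting, with the projection weights updated by the MSE loss.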
The improvement is consistent for the supervised models. Contrary to the unsupervised setting, these models are designed to learn about the task, so there is usually less room for improvement. Still, glove+resnet shows an error reduction of $12.9$ in the test set. Finally, use and vse++ show smaller improvements when we add visual information into the model. Figure FIGREF32 displays some examples where visual information positively contributes to predicting similarity values accurately. The examples show the case where related descriptions are lexicalized in a different way, so a text-only model (glove) predicts low similarity between captions (top two examples). Instead, the multimodal representation glove+resnet does have access to the image and can predict the similarity value of the two captions more accurately. The examples in the bottom show the opposite case, where similar sets of words are used to describe very different situations. The text-based model overestimates the similarity of the captions, while the multimodal model corrects the output by looking at the differences of the images. On the contrary, Figure FIGREF33 shows that images can also be misleading, and that the task is not as trivial as combining global representations of the image. In this case, related but different captions are supported by very similar images, and as a consequence, the multimodal model overestimates their similarity, while the text-only model focuses on the most discriminating piece of information in the text. Discussion ::: The effect of hyperparameters Neural models are sensitive to hyperparameters, and one might think that the results in the supervised scenario are due to hyperparameter optimization. Figure FIGREF35 displays the variability of $\rho $ in development across all hyperparameters. Due to space constraints we show text-only and multimodal concatenated models. Models are ordered by mean performance. As we can see, combined models show better mean performance, and all models except Glove exhibit tight variability. Conclusions and Future Work The long term goal of our research is to devise multimodal representation techniques that improve current inference capabilities. We have presented a novel task, Visual Semantic Textual Similarity (vSTS), where the inference capabilities of visual, textual, and multimodal representations can be tested directly. The dataset has been manually annotated by crowdworkers with high inter-annotator correlation ($\rho = 0.89$). We tested several well-known textual and visual representations, which we combined using concatenation and projection. Our results show, for the first time, that the addition of image representations allows better inference. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE. The improvement when using image representations is observed both when computing the similarity directly from multimodal representations, and also when training siamese networks. In the future, we would like to ground the text representations to image regions BIBREF40, which could avoid misleading predictions due to the global representation of the image. Finally, we would like to extend the dataset with more examples, as we acknowledge that the training set is too limited to train larger models.
This research was partially funded by the Basque Government excellence research group (IT1343-19), the NVIDIA GPU grant program, the Spanish MINECO (DeepReading RTI2018-096846-B-C21 (MCIU/AEI/FEDER, UE)) and project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018). Ander enjoys a PhD grant from the Basque Government.
largest improvement ($22-26\%$ E.R) when text-based unsupervised models are combined with image representations
b6e4b98fad3681691bcce13f57fb173aee30c592
b6e4b98fad3681691bcce13f57fb173aee30c592_0
Q: How they compute similarity between the representations? Text: Introduction Language understanding is a task proving difficult to automatize, because, among other factors, much of the information that is needed for the correct interpretation of an utterance is not explicit in text BIBREF0. This contrasts with how natural is language understanding for humans, who can cope easily with information absent in text, using common sense and background knowledge like, for instance, typical spatial relations between objects. From another perspective, it is well-known that the visual modality provides complementary information to that in the text. In fact, recent advances in deep learning research have led the field of computer vision and natural language processing to significant progress in tasks that involve visual and textual understanding. Tasks that include visual and textual content include Image Captioning BIBREF1, Visual Question Answering BIBREF2, and Visual Machine Translation BIBREF3, among others. On the other hand, progress in language understanding has been driven by datasets which measure the quality of sentence representations, specially those where inference tasks are performed on top of sentence representations, including textual entailment BIBREF4, BIBREF5 and semantic textual similarity (STS). In STS BIBREF6, for instance, pairs of sentences have been annotated with similarity scores, with top scores for semantically equivalent sentences and bottom scores for completely unrelated sentences. STS provides a unified framework for extrinsic evaluation of multiple semantic aspects such as compositionality and phrase similarity. Contrary to related tasks, such as textual entailment and paraphrase detection, STS incorporates the notion of graded semantic similarity between the pair of textual sentences and is symmetric. In this paper we extend STS to the visual modality, and present Visual Semantic Textual Similarity (vSTS), a task and dataset which allows to study whether better sentence representations can be built when having access to the corresponding images, in contrast with having access to the text alone. Similar to STS, annotators were asked to score the similarity between two items, but in this case each item comprises an image and a textual caption. Systems need to predict the human score. Figure FIGREF1 shows an instance in the dataset, with similarity scores in the captions. The example illustrates the need to re-score the similarity values, as the text-only similarity is not applicable to the multimodal version of the dataset: the annotators return a low similarity when using only text, while, when having access to the corresponding image, they return a high similarity. Although a dataset for multimodal inference exists (visual textual entailment BIBREF7) that dataset reused the text-only inference labels. The vSTS dataset aims to become a standard benchmark to test the contribution of visual information when evaluating the similarity of sentences and the quality of multimodal representations, allowing to test the complementarity of visual and textual information for improved language understanding. Although multimodal tasks such as image captioning, visual question answering and visual machine translation already show that the combination of both modalities can be effectively used, those tasks do not separately benchmark the inference capabilities of multimodal visual and textual representations. 
We evaluate a variety of well-known textual, visual and multimodal representations in supervised and unsupervised scenarios, and systematically explore if visual content is useful for sentence similarity. For text, we studied pre-trained word embeddings such as GloVe BIBREF8, pre-trained language models like GPT-2 and BERT BIBREF9, BIBREF10, sentence representations fine-tuned on an entailment task like USE BIBREF11, and textual representations pre-trained on a multimodal caption retrieval task like VSE++ BIBREF12. For image representation we use a model pre-trained on Imagenet (ResNet BIBREF13). In order to combine visual and textual representations we used concatenation and learn simple projections. Our experiments show that the text-only models are outperformed by their multimodal counterparts when adding visual representations, with up to 24% error reduction. Our contributions are the following: (1) We present a dataset which allows to evaluate visual/textual representations on an inference task. The dataset is publicly available under a free license. (2) Our results show, for the first time, that the addition of image representations allows better inference. (3) The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned in a text-only inference task like USE. (4) The improvement when using image representations is observed both when computing the similarity directly from multimodal representations, and also when training siamese networks. At the same time the improvement holds for all textual representations, even those fine-tuned on a similarity task. Related Work The task of Visual Semantic Textual Similarity stems from previous work on textual inference tasks. In textual entailment, given a textual premise and a textual hypothesis, systems need to decide whether the first entails the second, they are in contradiction, or none of the previous BIBREF4. Popular datasets include the Stanford Natural Language Inference dataset BIBREF5. As an alternative to entailment, STS datasets comprise pairs of sentences which have been annotated with similarity scores. STS systems are usually evaluated on the STS benchmark dataset BIBREF6. In this paper we present an extension of STS, so we present the task in more detail in the next section. Textual entailment has been recently extended with visual information. A dataset for visual textual entailment was presented in BIBREF7. Even if the task is different from the text-only counterpart, they reused the text-only inference ground-truth labels without re-annotating them. In fact, they annotate a small sample to show that the labels change. In addition, their dataset tested pairs of text snippets referring to a single image, and it was only useful for testing grounding techniques, but not to measure the complementarity of visual and textual representations. The reported results did not show that grounding improves results, while our study shows that the inference capabilities of multimodal visual and textual representations improve over text-only representations. In related work, BIBREF14 propose visual entailment, where the premise is an image and the hypothesis is textual. The chosen setting does not allow to test the contribution of multimodal representationn with respect to unimodal ones. 
The complementarity of visual and text representations for improved language understanding was first proven on word representations, where word embeddings were combined with visual or perceptual input to produce multimodal representations BIBREF15. The task of Visual Semantic Textual Similarity is also related to other multimodal tasks such as Image Captioning BIBREF16, BIBREF17, Text-Image Retrieval BIBREF18, BIBREF19 and Visual Question Answering BIBREF2. Image Captioning is a task that aims to generate a description of a given image. The task is related to ours in that it is required an understanding of the scene depicted in the image, so the system can generate an accurate description of it. Unlike vSTS, image captioning is a generation task in which evaluation is challenging and unclear, as the defined automatic metrics are somewhat problematic BIBREF20. On the other hand, Text-Image Retrieval task requires to find similarities and differences of the items in two modalities, so we can distinguish relevant and irrelevant texts and images regarding the query. Apart from not checking inference explicitly, the other main difference with regards to vSTS is that, in retrieval, items are ranked from most to least similar, whereas the vSTS task consists on scoring an accurate real valued similarity. A comprehensive overview is out of the scope, and thus we focus on the most related vision and language tasks. We refer the reader to BIBREF21 for a survey on vision and language research. Many of these tasks can be considered as extensions of previously existing NLP taks. For instance, Image Captioning can be seen as an extension of conditional language modeling BIBREF22 or natural language generation BIBREF23, whereas Visual Question Answering is a natural counterpart of the traditional Question Answering in NLP. Regarding multimodal and unimodal representation learning, convolutional neural networks (CNN) have become the standard architecture for generating representations for images BIBREF24. Most of these models learn transferable general image features in tasks such as image classification, and detection, semantic segmentation, and action recognition. Most used transferable global image representations are learned with deep CNN architectures such as AlexNet BIBREF25, VGG BIBREF26, Inception-v3 BIBREF27, and ResNet BIBREF13 using large datasets such as ImageNet BIBREF1, MSCOCO BIBREF28 and Visual Genome BIBREF29. Recently, Graph Convolution Networks (GCN) showed to be promising way to distill multiple input types multimodal representations BIBREF30. Language representation is mostly done with pretrained word embeddings like Glove BIBREF8 and sequence learning techniques such as Recurrent Neural Networks (RNN) BIBREF31. Recently, self-attention approaches like Transformers BIBREF32 provided transferable models (BERT, GPT-2, among others BIBREF9, BIBREF10) that significantly improve many state-of-the-art tasks in NLP. Alternatively, sentence representations have been fine-tuned on an entailment task BIBREF11. We will present those used in our work in more detail below. The Visual STS Dataset STS assesses the degree to which two sentences are semantically equivalent to each other. The annotators measure the similarity among sentences, with higher scores for more similar sentences. The annotations of similarity were guided by the scale in Table TABREF4, ranging from 0 for no meaning overlap to 5 for meaning equivalence. 
Intermediate values reflect interpretable levels of partial overlap in meaning. In this work, we extend the STS task with images, providing visual information that models can use, and assess how much visual content can contribute to a language understanding task. The input of the task now consists of two items, each comprising an image and its corresponding caption. In the same way as in STS, systems need to score the similarity of the sentences with the help of the images. Figure FIGREF1 shows an example of an instance in the dataset. In previous work reported in a non-archival workshop paper BIBREF33, we presented a preliminary dataset which used the text-only ground-truth similarity scores. The 819 pairs were extracted from a subset of the STS benchmark, more specifically, the so-called STS-images subset, which contains pairs of captions with access to images from PASCAL VOC-2008 BIBREF34 and Flickr-8K BIBREF35. Our manual analysis, including examples like Figure FIGREF1, showed that in many cases the text-only ground truth was not valid, so we decided to re-annotate the dataset, this time showing the images in addition to the captions (the methodology is identical to the AMT annotation method mentioned below). The correlation of the new annotations with regard to the old ones was high (0.9 $\rho $), showing that the change in scores was not drastic, but that annotations did differ. The annotators tended to return higher similarity scores, as the mean similarity score across the dataset increased from 1.7 to 2.1. The inter-tagger correlation was comparable to the text-only task, showing that the new annotation task was well-defined. From another perspective, the fact that we could only extract 819 pairs from existing STS datasets showed the need to sample new pairs from other image-caption datasets. In order to be effective in measuring the quality of multimodal representations, we defined the following desiderata for the new dataset: (1) Following STS datasets, the similarity values need to be balanced, showing a uniform distribution; (2) Paired images have to be different to avoid making the task trivial, as hand analysis of image-caption datasets showed that two captions of the same image tended to be paraphrases of each other; (3) The images should not be present in more than one instance, to avoid biases in the visual side; (4) It has to contain a wide variety of images so that we can draw stronger conclusions. The preliminary dataset fulfilled (2) and (3), but it was skewed towards low similarity values and the variety was limited. The Visual STS Dataset ::: Data Collection The data collection of sentence-image pairs comprised several steps, including the selection of pairs to be annotated, the annotation methodology, and a final filtering stage. The Visual STS Dataset ::: Data Collection ::: 1. Sampling data for manual annotation. We make use of two well-known image-caption datasets. On one hand, the Flickr30K dataset BIBREF36 has about 30K images with 5 manually generated captions per image. On the other hand, we use the Microsoft COCO dataset BIBREF28, which contains more than 120K images and 5 captions per image. Using both sources we hope to cover a wide variety of images. In order to select pairs of instances, we did two sampling rounds. The goal of the first round is to gather a large number of varied image pairs, with their captions, that contains interesting pairs. We started by sampling images. We then combined two ways of sampling pairs of images. 
In the first, we generated pairs by sampling the images randomly. This way, we ensure a higher variety of paired scenes, but presumably two captions paired at random will tend to have very low similarity. In the second, we paired images taking into account their visual similarity, ensuring the selection of related scenes with a higher similarity rate. We used the cosine distance between the top-layer representations of a pretrained ResNet-50 BIBREF13 to compute the similarity of images. We collected an equal number of pairs for the random and the visual similarity strategies, gathering, in total, $155,068$ pairs. As each image has 5 captions, we had to select one caption for each image, and we decided to select the two captions with the highest word overlap. This way, we get more balanced samples in terms of caption similarity. The initial sampling created thousands of pairs that were skewed towards very low similarity values. Given that manual annotation is a costly process, and with the goal of having a balanced dataset, we used an automatic similarity system to score all the pairs. This text-only similarity system is an ensemble of feature-based machine learning systems that uses a large variety of distance and machine-translation based features. The model was evaluated on a subset of the STS benchmark dataset BIBREF6 and compared favorably to other baseline models. As this model is very different from current deep learning techniques, it should not bias the dataset sampling in a way which influences current similarity systems. The automatic scores were used to sample the final set of pairs as follows. We defined five similarity ranges ($(0, 1], \ldots ,(4, 5]$) and randomly selected the same number of pairs from each range of the initial paired sample. We set a maximum sample of 3000 instances (i.e., 600 instances per range). Given that the high similarity range had fewer than 600 instances, we collected a total of 2639 potential text-image candidate pairs for manual annotation. Figure FIGREF8 shows that the proposed methodology samples an approximately uniform distribution, with the exception of the higher similarity values (left and middle plots). In addition, we show that the lower predicted similarities are mainly coming from random sampling, whereas, as expected, the higher ones come from similar images. The Visual STS Dataset ::: Data Collection ::: 2. Manual annotations. In order to annotate the sample of 2639 pairs, we used Amazon Mechanical Turk (AMT). Crowdworkers followed the same instructions as in previous STS annotation campaigns BIBREF6, very similar to those in Table TABREF4. Annotators needed to focus on textual similarity with the aid of aligned images. We got up to 5 scores per item, and we discarded annotators that showed low correlation with the rest of the annotators ($\rho < 0.75$). In total, 56 annotators took part. On average each crowdworker annotated 220 pairs, where the amounts ranged from 19 to 940 annotations. Regardless of the annotation amounts, most of the annotators showed high correlations with the rest of the participants. We computed the annotation correlation by aggregating the individual Pearson correlation with the averaged similarity of the other annotators. The annotation shows high correlation among the crowdworkers ($\rho = 0.89$ $\pm 0.01$), comparable to that of text-only STS datasets. Table TABREF10 shows the average item similarity and item disagreement in the annotation. We defined item disagreement as the standard deviation of the annotated similarity value. 
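As an illustration of the pair-sampling and caption-selection procedure described above, the following sketch pairs the captions of two images by maximum word overlap and then buckets candidate pairs into the five similarity ranges using an automatic scorer. All function names are ours, the Jaccard definition of word overlap is an assumption (the exact formula is not specified), and `scorer` stands in for the text-only ensemble system.

```python
import random
from itertools import product

def word_overlap(a: str, b: str) -> float:
    """Word overlap between two captions (Jaccard over lower-cased tokens; the exact
    formula used by the authors is not specified)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def select_caption_pair(captions_a, captions_b):
    """Pick one caption per image so that the selected pair has the highest word overlap."""
    return max(product(captions_a, captions_b), key=lambda pair: word_overlap(*pair))

def bucket_sample(caption_pairs, scorer, per_range=600):
    """Sample up to `per_range` pairs from each automatic-similarity range (0,1], ..., (4,5];
    `scorer` stands in for the text-only ensemble similarity system."""
    ranges = [(lo, lo + 1) for lo in range(5)]
    buckets = {r: [] for r in ranges}
    for pair in caption_pairs:
        score = scorer(*pair)                 # automatic text-only similarity in [0, 5]
        for lo, hi in ranges:
            if lo < score <= hi:
                buckets[(lo, hi)].append(pair)
                break
    sampled = []
    for items in buckets.values():
        random.shuffle(items)
        sampled.extend(items[:per_range])
    return sampled
```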
The low average similarity can be explained by the high number of zero-similarity pairs. Item disagreement is moderately low (about 0.6 points out of 5), which is in accordance with the high correlation between the annotators. The Visual STS Dataset ::: Data Collection ::: 3. Selection of difficult examples. In preliminary experiments, the evaluation of two baseline models, word overlap and the ensemble system mentioned before, showed that the sampling strategy introduced a large number of trivial examples. For example, the word overlap system attained $0.83$ $\rho $. This high correlation could be the result of using word-overlap in the first sampling round. In order to create a more challenging dataset on which to measure the effectiveness of multimodal representations, we defined an easiness metric to filter out some of the easy examples from the annotated dataset. We defined the easiness of an example as its contribution to the overall correlation between the word-overlap similarity and the gold standard across the whole dataset. Taking the inner product of the Pearson correlation formula as basis, we measure the easiness of an annotated example $i$ as $e_{i} = \frac{o_{i} - \overline{o}}{s_{o}} \cdot \frac{gs_{i} - \overline{gs}}{s_{gs}}$, where $o_{i}$ is the word-overlap similarity of the $i$-th pair, $\overline{o}$ is the mean overlap similarity in the dataset, and $s_{o}$ is the standard deviation. Similarly, variable $gs_{i}$ is the gold-standard value of the $i$-th pair, and $\overline{gs}$ and $s_{gs}$ are the mean and standard deviation of gold values in the dataset, respectively. We removed 30% of the easiest examples and created a more challenging dataset of 1858 pairs, reducing $\rho $ to $0.57$ for the word-overlap model, and to $0.66$ (from $0.85$) for the ML-based approach. The Visual STS Dataset ::: Dataset Description The full dataset comprises both the sample mentioned above and the 819 pairs from our preliminary work, totalling 2677 pairs. Figure FIGREF14 shows the final item similarity distribution. Although the distribution is skewed towards lower similarity values, we consider that all the similarity ranges are sufficiently well covered. Average similarity of the dataset is $1.9$ with a standard deviation of $1.36$ points. The dataset contains 335 zero-valued pairs out of the 2677 instances, which partly explains the lower average similarity. Evaluation of Representation Models The goal of the evaluation is to explore whether representation models that have access to images, instead of text alone, have better inference abilities. We consider the following models. ResNet BIBREF13 is a deep network of 152 layers in which the residual representation functions are learned instead of learning the signal representation directly. The model is trained over 1.2 million images of ImageNet, the ILSVRC subset of 1000 image categories. We use the top layer of a pretrained ResNet-152 model to represent the images associated with the text. Each image is represented with a vector of 2048 dimensions. GloVe. The Global Vector model BIBREF8 is a log-linear model trained to encode semantic relationships between words as vector offsets in the learned vector space, combining global matrix factorization and local context window methods. Since GloVe is a word-level vector model, we build sentence representations with the mean of the vectors of the words composing the sentence. The pre-trained model from GloVe considered in this paper is the 6B-300d one, with a vocabulary of 400k words and 300-dimensional vectors, trained on a dataset of 6 billion tokens. 
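The easiness filter described above can be computed directly from its definition, i.e. the per-example inner-product term of the Pearson correlation between word-overlap and gold similarities. A minimal sketch (variable names are ours) that drops the 30% easiest examples:

```python
import numpy as np

def easiness(overlap: np.ndarray, gold: np.ndarray) -> np.ndarray:
    """Per-example inner-product term of the Pearson correlation between
    word-overlap similarity and gold similarity (high value = easy example)."""
    o_z = (overlap - overlap.mean()) / overlap.std()
    g_z = (gold - gold.mean()) / gold.std()
    return o_z * g_z

def drop_easiest(pairs, overlap, gold, fraction=0.30):
    """Remove the `fraction` of examples with the highest easiness."""
    e = easiness(np.asarray(overlap, dtype=float), np.asarray(gold, dtype=float))
    keep = np.argsort(e)[: int(round(len(pairs) * (1 - fraction)))]  # lowest-easiness indices
    return [pairs[i] for i in sorted(keep)]
```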
BERT. The Bidirectional Encoder Representations from Transformers model BIBREF9 implements a novel methodology based on the so-called masked language model, which randomly masks some of the tokens from the input, and predicts the original vocabulary id of the masked word based only on its context. The BERT model used in our experiments is the BERT-Large Uncased (24-layer, 1024-hidden, 16-heads, 340M parameters). In order to obtain the sentence-level representation we extract the token embeddings of the last layer and compute the mean vector, yielding a vector of 1024 dimensions. GPT-2. The Generative Pre-Training-2 model BIBREF10 is a language model based on the transformer architecture, which is trained on the task of predicting the next word, given all the previous words occurring in some text. In the same manner as with BERT and GloVe, we extract the token embeddings of the last layer and compute the mean vector to obtain the sentence-level representation of 768 dimensions. The GPT-2 model used in our experiments was trained on a very large corpus of about 40 GB of text data and has 1.5 billion parameters. USE. The Universal Sentence Encoder BIBREF11 is a model for encoding sentences into embedding vectors, specifically designed for transfer learning in NLP. Based on a deep averaging network encoder, the model is trained for varying text lengths, such as sentences, phrases or short paragraphs, and in a variety of semantic tasks including STS. The encoder returns the vector of the sentence with 512 dimensions. VSE++. The Visual-Semantic Embedding BIBREF12 is a model trained for image-caption retrieval. The model learns a joint space of aligned images and captions. The model is an improvement of the original introduced by BIBREF37, and combines a ResNet-152 over images with a bidirectional Recurrent Neural Network (GRU) over the sentences. Texts and images are projected onto the joint space, obtaining representations of 1024 dimensions both for images and texts. We used projections of images and texts in our experiments. The VSE++ model used in our experiments was pre-trained on the Microsoft COCO dataset BIBREF28 and the Flickr30K dataset BIBREF36. Table TABREF15 summarizes the sentence and image representations used in the evaluation. Evaluation of Representation Models ::: Experiments ::: Experimental Setting. We split the vSTS dataset into training, validation and test partitions, sampling at random and preserving the overall score distributions. In total, we use 1338 pairs for training, 669 for validation, and the remaining 670 pairs for the final testing. Similar to the STS task, we use the Pearson correlation coefficient ($\rho $) as the evaluation metric of the task. Evaluation of Representation Models ::: Experiments ::: STS models. Our goal is to keep similarity models as simple as possible in order to directly evaluate textual and visual representations and avoid as much as possible the influence of the parameters that come into play when learning a particular task. We defined two scenarios: the supervised and the unsupervised scenarios. In the supervised scenario we train a Siamese Regression model in a similar way to that presented in BIBREF38. Given a sentence/image pair, we wish to predict a real-valued similarity in some range $[1,K]$, with $K=5$ in our experiments. We first produce sentence/image representations $h_{L}$ and $h_{R}$ for each sentence in the pair using any of the unimodal models described above, or using multimodal representations as explained below. 
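Before turning to the regression model, here is a minimal sketch of how the sentence-level vectors described above can be obtained by mean pooling, both for the contextual models (BERT, GPT-2) and for GloVe. The use of the Hugging Face transformers library is our choice of tooling, not necessarily the authors'; for simplicity the mean includes the special tokens.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

def mean_pooled_sentence_vector(sentence: str, model_name: str = "bert-large-uncased") -> torch.Tensor:
    """Sentence vector = mean of the last-layer token embeddings (works for BERT or
    GPT-2 checkpoints; special tokens are included in the mean for simplicity)."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    model.eval()
    with torch.no_grad():
        inputs = tokenizer(sentence, return_tensors="pt")
        last_hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
    return last_hidden.mean(dim=1).squeeze(0)            # (hidden_size,)

def glove_sentence_vector(sentence: str, glove: dict, dim: int = 300) -> np.ndarray:
    """GloVe sentence vector = mean of the word vectors found in the vocabulary."""
    vectors = [glove[w] for w in sentence.lower().split() if w in glove]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)
```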
Given these representations, we predict the similarity score $o$ using a regression model that takes both the distance and the angle between the pair ($h_{L}$, $h_{R}$). Note that the distance and angle concatenation ($[h_{\times }, h_{+}]$) yields a $2 * d$-dimensional vector. The resulting vector is used as input for the non-linear hidden layer ($h_{s}$) of the model. Contrary to BIBREF38, we empirically found that the estimation of a continuous value worked better than learning a softmax distribution over $[1,K]$ integer values. The loss function of our model is the Mean Square Error (MSE), which is the most commonly used regression loss function. In the unsupervised scenario similarity is computed as the cosine of the produced $h_{L}$ and $h_{R}$ sentence/image representations. Evaluation of Representation Models ::: Experiments ::: Multimodal representation. We combined textual and image representations in two simple ways. The first method is concatenation of the text and image representation (concat). Before concatenation we applied the L2 normalization to each of the modalities. The second method is to learn a common space for the two modalities before concatenation (project). The projection of each modality learns a space of $d$-dimensions, so that $h_{1}, h_{2} \in \mathbb {R}^{d}$. Once the multimodal representation ($h_{m}$) is produced for the left and right items, the vectors are directly plugged into the regression layers. Projections are learned end-to-end with the regression layers and the MSE as loss function. Evaluation of Representation Models ::: Experiments ::: Hyperparameters and training details. We use the validation set to learn parameters of the supervised models, and to carry out an exploration of the hyperparameters. We train each model a maximum of 300 epochs and apply an early-stopping strategy with a patience of 25 epochs. For early stopping we monitor the MSE loss value on validation. For the remaining hyperparameters, we run a grid search to select their values. We explore learning rate values (0.0001, 0.001, 0.01, 0.05), L2 regularization weights (0.0, 0.0001, 0.001, 0.01), and different hidden layer ($h_{s}$) dimensions (50, 100, 200, 300). In addition, we activate and deactivate batch normalization in each layer for each hyperparameter configuration. Evaluation of Representation Models ::: Results ::: The unsupervised scenario. Table TABREF26 reports the results using the item representations directly. We report results over train and dev partitions for completeness, but note that none of them was used to tune the models. As can be seen, multimodal representations consistently outperform their text-only counterparts. This confirms that, overall, visual information is helpful in the semantic textual similarity task and that image and sentence representations are complementary. For example, the bert model improves more than 13 points when visual information provided by the resnet is concatenated. glove shows a similar or even larger improvement, with similar trends for use and vse++(text). Although vse++(img) shows better performance than resnet when applying them alone, further experimentation showed lower complementarity when combined with textual representations (e.g. $0.807$ $\rho $ in test when combining the textual and visual modalities of vse++). This is something expected as vse++(img) is pre-trained along with the textual part of the vse++ model on the same task. We do not show the combinations with vse++(img) due to lack of space. 
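To make the supervised setup concrete, the sketch below implements a siamese regression head on top of (optionally multimodal) representations. The "distance" and "angle" features are taken to be the absolute difference and the element-wise product, following the usual convention for $h_{+}$ and $h_{\times }$ in BIBREF38-style regressors; the sigmoid non-linearity and layer sizes are our assumptions. The multimodal "concat" combination L2-normalizes each modality before concatenation, and the unsupervised score is plain cosine similarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def concat_multimodal(text_vec: torch.Tensor, img_vec: torch.Tensor) -> torch.Tensor:
    """'concat' combination: L2-normalize each modality, then concatenate."""
    return torch.cat([F.normalize(text_vec, dim=-1), F.normalize(img_vec, dim=-1)], dim=-1)

class SiameseRegressor(nn.Module):
    """Predicts a continuous similarity score from a pair of representations."""

    def __init__(self, dim: int, hidden: int = 200):
        super().__init__()
        self.hidden = nn.Linear(2 * dim, hidden)  # input is the concatenation [h_times, h_plus]
        self.out = nn.Linear(hidden, 1)

    def forward(self, h_left: torch.Tensor, h_right: torch.Tensor) -> torch.Tensor:
        h_times = h_left * h_right             # "angle" features (element-wise product)
        h_plus = (h_left - h_right).abs()      # "distance" features (absolute difference)
        h_s = torch.sigmoid(self.hidden(torch.cat([h_times, h_plus], dim=-1)))
        return self.out(h_s).squeeze(-1)       # continuous score, trained with nn.MSELoss()

def cosine_score(h_left: torch.Tensor, h_right: torch.Tensor) -> torch.Tensor:
    """Unsupervised scenario: similarity is the cosine of the two representations."""
    return F.cosine_similarity(h_left, h_right, dim=-1)

# Example: BERT (1024-d) text vectors concatenated with ResNet (2048-d) image vectors.
# model = SiameseRegressor(dim=1024 + 2048)
```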
Interestingly, results show that images alone are valid for predicting caption similarity ($0.627 \rho $ in test). Actually, in this experimental setting resnet is on par with bert, which is the best purely unsupervised text-only model. Surprisingly, gpt-2 representations are not useful for text similarity tasks. This might be because language models tend to forget past context as they focus on predicting the next token BIBREF39. Due to the low results of gpt-2 we decided not to combine it with resnet. Evaluation of Representation Models ::: Results ::: The supervised scenario. Table TABREF29 shows a similar pattern to that in the unsupervised setting. Overall, models that use a conjunction of multimodal features significantly outperform unimodal models, and this confirms, in a more competitive scenario, that adding visual information makes the STS task easier to learn. The gain of multimodal models is considerable compared to the text-only models. The most significant gain is obtained when glove features are combined with resnet. The model improves more than $15.0$ points. In this case, the improvement over bert is lower, but still considerable with more than $4.0$ points. In the same vein as in the unsupervised scenario, features obtained with a resnet can be as competitive as some text based models (e.g. BERT). gpt-2, as in the unsupervised scenario, does not produce useful representations for semantic similarity tasks. Surprisingly, the regression model with gpt-2 features is not able to learn anything in the training set. As in the previous scenario, we do not combine gpt-2 with visual features. The multimodal versions of vse++ and use are the best models among the supervised approaches. The textual versions of use and vse++ alone obtain very competitive results and outperform some of the multimodal models (the concatenated versions of glove and bert with resnet). Results might indicate that text-only models with sufficient training data can be on par with multimodal models, but, still, when there is data scarcity, multimodal models can perform better as they have more information over the same data point. Comparison between projected and concatenated models shows that projected models attain slightly better results in two cases, but the best overall results are obtained when concatenating vse++(text) with resnet. Although concatenation proves to be a hard baseline, we expect that more sophisticated combination methods like grounding BIBREF40 will obtain larger gains in the future. Discussion ::: Contribution of the Visual Content Table TABREF31 summarizes the contribution of the images on text representations in the test partition. The contribution is consistent across all text-based representations. We measure the absolute difference (Diff) and the error reduction (E.R) of each textual representation with respect to its multimodal counterpart. For the comparison we chose the best text model for each representation. As expected, we obtain the largest improvement ($22-26\%$ E.R) when text-based unsupervised models are combined with image representations. Note that unsupervised models are not learning anything about the specific task, so the more information in the representation, the better. In the case of use and vse++ the improvement is significant but not as large as for the purely unsupervised models. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE. 
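The Diff and error-reduction (E.R.) figures discussed above follow from the Pearson correlations of a text-only model and its multimodal counterpart. The helper below assumes the error is taken as $1-\rho$, which is one plausible reading of "error reduction"; the exact definition is not spelled out in this text, so treat the formula as an assumption.

```python
from scipy.stats import pearsonr

def evaluate(predictions, gold) -> float:
    """Task metric: Pearson correlation between predicted and gold similarity scores."""
    return pearsonr(predictions, gold)[0]

def contribution(rho_text: float, rho_multimodal: float):
    """Absolute difference (Diff) and error reduction (E.R., in %) of a multimodal
    model over its text-only counterpart, taking the error to be 1 - rho (assumption)."""
    diff = rho_multimodal - rho_text
    error_reduction = 100.0 * diff / (1.0 - rho_text)
    return diff, error_reduction

# e.g. contribution(0.70, 0.78) -> (0.08, ~26.7% of the remaining error removed)
```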
The improvement is consistent for the supervised models. Contrary to the unsupervised setting, these models are designed to learn about the task, so there is usually less room for improvement. Still, glove+resnet shows an error reduction of $12.9$ in the test set. Finally, use and vse++ show smaller improvements when we add visual information to the model. Figure FIGREF32 displays some examples where visual information contributes positively to predicting similarity values accurately. The examples show the case where related descriptions are lexicalized in different ways, so a text-only model (glove) predicts low similarity between the captions (top two examples). Instead, the multimodal representation glove+resnet does have access to the image and can predict the similarity value of the two captions more accurately. The examples at the bottom show the opposite case, where similar sets of words are used to describe very different situations. The text-based model overestimates the similarity of the captions, while the multimodal model corrects the output by looking at the differences of the images. On the contrary, Figure FIGREF33 shows that images can also be misleading, and that the task is not as trivial as combining global representations of the image. In this case, related but different captions are supported by very similar images, and as a consequence, the multimodal model overestimates their similarity, while the text-only model focuses on the most discriminating piece of information in the text. Discussion ::: The effect of hyperparameters Neural models are sensitive to hyperparameters, and we might think that the results in the supervised scenario are due to hyperparameter optimization. Figure FIGREF35 displays the variability of $\rho $ in development across all hyperparameters. Due to space constraints we show text-only and multimodal concatenated models. Models are ordered by mean performance. As we can see, combined models show better mean performance, and all models except Glove exhibit tight variability. Conclusions and Future Work The long-term goal of our research is to devise multimodal representation techniques that improve current inference capabilities. We have presented a novel task, Visual Semantic Textual Similarity (vSTS), where the inference capabilities of visual, textual, and multimodal representations can be tested directly. The dataset has been manually annotated by crowdworkers with high inter-annotator correlation ($\rho = 0.89$). We tested several well-known textual and visual representations, which we combined using concatenation and projection. Our results show, for the first time, that the addition of image representations allows better inference. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE. The improvement when using image representations is observed both when computing the similarity directly from multimodal representations, and also when training siamese networks. In the future, we would like to ground the text representations to image regions BIBREF40, which could avoid misleading predictions due to the global representation of the image. Finally, we would like to extend the dataset with more examples, as we acknowledge that the training set is too limited to train larger models. 
This research was partially funded by the Basque Government excellence research group (IT1343-19), the NVIDIA GPU grant program, the Spanish MINECO (DeepReading RTI2018-096846-B-C21 (MCIU/AEI/FEDER, UE)) and project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018). Ander enjoys a PhD grant from the Basque Government.
similarity is computed as the cosine of the produced $h_{L}$ and $h_{R}$ sentence/image representations
af7a9b56596f90c84f962098f7e836309161badf
af7a9b56596f90c84f962098f7e836309161badf_0
Q: How big is vSTS training data? Text: Introduction Language understanding is a task proving difficult to automatize, because, among other factors, much of the information that is needed for the correct interpretation of an utterance is not explicit in text BIBREF0. This contrasts with how natural is language understanding for humans, who can cope easily with information absent in text, using common sense and background knowledge like, for instance, typical spatial relations between objects. From another perspective, it is well-known that the visual modality provides complementary information to that in the text. In fact, recent advances in deep learning research have led the field of computer vision and natural language processing to significant progress in tasks that involve visual and textual understanding. Tasks that include visual and textual content include Image Captioning BIBREF1, Visual Question Answering BIBREF2, and Visual Machine Translation BIBREF3, among others. On the other hand, progress in language understanding has been driven by datasets which measure the quality of sentence representations, specially those where inference tasks are performed on top of sentence representations, including textual entailment BIBREF4, BIBREF5 and semantic textual similarity (STS). In STS BIBREF6, for instance, pairs of sentences have been annotated with similarity scores, with top scores for semantically equivalent sentences and bottom scores for completely unrelated sentences. STS provides a unified framework for extrinsic evaluation of multiple semantic aspects such as compositionality and phrase similarity. Contrary to related tasks, such as textual entailment and paraphrase detection, STS incorporates the notion of graded semantic similarity between the pair of textual sentences and is symmetric. In this paper we extend STS to the visual modality, and present Visual Semantic Textual Similarity (vSTS), a task and dataset which allows to study whether better sentence representations can be built when having access to the corresponding images, in contrast with having access to the text alone. Similar to STS, annotators were asked to score the similarity between two items, but in this case each item comprises an image and a textual caption. Systems need to predict the human score. Figure FIGREF1 shows an instance in the dataset, with similarity scores in the captions. The example illustrates the need to re-score the similarity values, as the text-only similarity is not applicable to the multimodal version of the dataset: the annotators return a low similarity when using only text, while, when having access to the corresponding image, they return a high similarity. Although a dataset for multimodal inference exists (visual textual entailment BIBREF7) that dataset reused the text-only inference labels. The vSTS dataset aims to become a standard benchmark to test the contribution of visual information when evaluating the similarity of sentences and the quality of multimodal representations, allowing to test the complementarity of visual and textual information for improved language understanding. Although multimodal tasks such as image captioning, visual question answering and visual machine translation already show that the combination of both modalities can be effectively used, those tasks do not separately benchmark the inference capabilities of multimodal visual and textual representations. 
1338 pairs for training
ba61ed892b4f7930430389e80a0c8e3b701c8e5d
ba61ed892b4f7930430389e80a0c8e3b701c8e5d_0
Q: Which evaluation metrics do they use for language modelling? Text: Introduction Recent success in language modelling and representation learning have largely focused on learning the semantic structures of language BIBREF0. Syntactic information, such as part-of-speech (POS) sequences, is an essential part of language and can be important for tasks such as authorship identification, writing-style analysis, translation, etc. Methods that learn syntactic representations have received relatively less attention, with focus mostly on evaluating the semantic information contained in representations produced by language models. Multilingual embeddings have been shown to achieve top performance in many downstream tasks BIBREF1, BIBREF2. By training over large corpora, these models have shown to generalize to similar but unseen contexts. However, words contain multiple types of information: semantic, syntactic, and morphologic. Therefore, it is possible that syntactically different passages have similar embeddings due to their semantic properties. On tasks like the ones mentioned above, discriminating using patterns that include semantic information may result in poor generalization, specially when datasets are not sufficiently representative. In this work, we study methods that learn sentence-level embeddings that explicitly capture syntactic information. We focus on variations of sequence-to-sequence models BIBREF3, trained using a multilingual corpus with universal part-of-speech (UPOS) tags for the target languages only. By using target-language UPOS tags in the training process, we are able to learn sentence-level embeddings for source languages that lack UPOS tagging data. This property can be leveraged to learn syntactic embeddings for low-resource languages. Our main contributions are: to study whether sentence-level syntactic embeddings can be learned efficiently, to evaluate the structure of the learned embedding space, and to explore the potential of learning syntactic embeddings for low-resource languages. We evaluate the syntactic structure of sentence-level embeddings by performing nearest-neighbour (NN) search in the embedding space. We show that these embeddings exhibit properties that correlate with similarities between UPOS sequences of the original sentences. We also evaluate the embeddings produced by language models such as BERT BIBREF0 and show that they contain some syntactic information. We further explore our method in the few-shot setting for low-resource source languages without large, high quality treebank datasets. We show its transfer-learning capabilities on artificial and real low-resource languages. Lastly, we show that training on multilingual parallel corpora significantly improves the learned syntactic embeddings. This is similar to existing results for models trained (or pre-trained) on multiple languages BIBREF4, BIBREF2 for downstream tasks BIBREF5. Related Work Training semantic embeddings based on multilingual data was studied by MUSE BIBREF1 and LASER BIBREF2 at the word and sentence levels respectively. Multi-task training for disentangling semantic and syntactic information was studied in BIBREF6. This work also used a nearest neighbour method to evaluate the syntactic properties of models, though their focus was on disentanglement rather than embedding quality. The syntactic content of language models was studied by examining syntax trees BIBREF7, subject-object agreement BIBREF8, and evaluation on syntactically altered datasets BIBREF9, BIBREF10. 
These works did not examine multilingual models. Distant supervision BIBREF11, BIBREF12 has been used to learn POS taggers for low-resource languages using cross-lingual corpora. The goal of these works is to learn word-level POS tags, rather than sentence-level syntactic embeddings. Furthermore, our method does not require explicit POS sequences for the low-resource language, which results in a simpler training process than distant supervision. Method ::: Architecture We iterated upon the model architecture proposed in LASER BIBREF2. The model consists of a two-layer Bi-directional LSTM (BiLSTM) encoder and a single-layer LSTM decoder. The encoder is language agnostic as no language context is provided as input. In contrast to LASER, we use the concatenation of last hidden and cell states of the encoder to initialize the decoder through a linear projection. At each time-step, the decoder takes an embedding of the previous POS target concatenated with an embedding representing the language context, as well as a max-pooling over encoder outputs. Figure FIGREF2 shows the architecture of the proposed model. The input embeddings for the encoder were created using a jointly learned Byte-Pair-Encoding (BPE) vocabulary BIBREF13 for all languages by using sentencepiece. Method ::: Training Training was performed using an aligned parallel corpus. Given a source-target aligned sentence pair (as in machine translation), we: Convert the sentence in the source language into BPE Look up embeddings for BPE as the input to the encoder Convert the sentence in a target language into UPOS tags, in the tagset of the target language. Use the UPOS tags in step 3 as the targets for a cross-entropy loss. Hence, the task is to predict the UPOS sequence computed from the translated input sentence. The UPOS targets were obtained using StandfordNLP BIBREF14 . Dropout with a drop probability of 0.2 was applied to the encoder. The Adam optimizer BIBREF15 was used with a constant learning rate of $0.0001$. Table TABREF4 shows a full list of the hyperparameters used in the training procedure. Method ::: Dataset To create our training dataset, we followed an approach similar to LASER. The dataset contains 6 languages: English, Spanish, German, Dutch, Korean and Chinese Mandarin. These languages use 3 different scripts, 2 different language orderings, and belong to 4 language families. English, Spanish, German, and Dutch use a Latin-based script. However, Spanish is a Romantic language while the others are Germanic languages. Chinese Mandarin and Korean are included because they use non-latin based scripts and originate from language families distinct from the other languages. Although the grammatical rules vary between the selected languages, they share a number of key characteristics such as the Subject-Verb-Object ordering, except Korean (which mainly follows the Subject-Object-Verb order). We hope to extend our work to other languages with different scripts and sentence structures, such as Arabic, Japanese, Hindi, etc. in the future. The dataset was created by using translations provided by Tatoeba and OpenSubtitles BIBREF16. They were chosen for their high availability in multiple languages. Statistics of the final training dataset are shown in Table TABREF14. Rows and columns correspond to source and target languages respectively. Method ::: Dataset ::: Tatoeba Tatoeba is a freely available crowd-annotated dataset for language learning. We selected all sentences in English, Spanish, German, Dutch, and Korean. 
We pruned the dataset to contain only sentences with at least one translation to any of the other languages. The final training set contains 1.36M translation sentence pairs from this source. Method ::: Dataset ::: OpenSubtitles We augmented our training data by using the 2018 OpenSubtitles dataset. OpenSubtitles is a publicly available dataset based on movie subtitles BIBREF16. We created our training dataset from selected aligned subtitles by taking the unique translations among the first million sentences, for each aligned parallel corpus. We further processed the data by pruning to remove samples with less than 3 words, multiple sentences, or incomplete sentences. The resulting dataset contains 1.9M translation sentence pairs from this source. Experiments We aim to address the following questions: Can syntactic structures be embedded? For multiple languages? Can parallel corpora be used to learn syntactic structure for low-resource languages? Does multilingual pre-training improve syntactic embeddings? We address question 1 in Secs. SECREF20 and SECREF28 by evaluating the quality of syntactic and semantic embeddings in several ways. Questions 2 and 3 are addressed in Sec. SECREF30 by studying the transfer-learning performance of syntactic embeddings. Experiments ::: Quality of Syntactic Embeddings We studied the quality of the learned syntactic embeddings by using a nearest-neighbour (NN) method. First, we calculated the UPOS sequence of all sentences in the Tatoeba dataset by using a tagger. Sentences were then assigned to distinct groups according to their UPOS sequence, i.e., all sentences belonging to the same group had the same UPOS sequence. For all languages except Korean, a held-out test set was created by randomly sampling groups that contained at least 6 sentences. For Korean, all groups containing at least 6 sentences were kept as the test set since the dataset is small. During evaluation, we applied max-pooling to the outputs of the encoder to obtain the syntactic embeddings of the held-out sentences. For each syntactic embedding, we find its top nearest neighbour (1-NN) and top-5 nearest neighbours (5-NN) in the embedding space for the held-out sentences, based on their UPOS group. Given $n$ sentences $S = \lbrace s_0, \dots , s_{n-1}\rbrace $ and their embeddings $E = \lbrace e_0, \dots , e_{n-1}\rbrace $, for each $s_i$ there is a set of $k$ gold nearest neighbours $G(i, k) = \lbrace g_0, \dots , g_{k-1}\rbrace $, $G(i, k) \subseteq S$ such that $d(s_i, g) \le d(s_i, s) \textrm { for all } g \in G(i, k) \textrm { and } s \in S \setminus G(i, k)$, where $d(\cdot , \cdot )$ is the cosine distance. Given embedding $e_i$, we calculate cosine distances $\lbrace d(e_i, e_j) \textrm { for } e_j \in E, e_j \ne e_i\rbrace $ and sort them into non-decreasing order $d_{j_0} \le d_{j_1} \le \dots \le d_{j_{n-2}}$. We consider the ordering to be unique as the probability of embedding cosine distances being equal is very small. The set of embedded $k$-nearest neighbours of $s_i$ is defined as Finally, the $k$-nearest neighbours accuracy for $s_i$ is given by A good embedding model should cluster the embeddings for similar inputs in the embedding space. Hence, the 5-NN test can be seen as an indicator of how cohesive the embedding space is. The results are shown in Table TABREF22. The differences in the number of groups in each language are due to different availabilities of sentences and sentence-types in the Tatoeba dataset. 
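To illustrate the nearest-neighbour evaluation just described, the sketch below retrieves, for each sentence embedding, its $k$ nearest neighbours by cosine distance and scores how many of them share the sentence's UPOS group. The function and variable names are ours, and the exact accuracy formula is an assumption based on the description above rather than the authors' implementation.

```python
import numpy as np

def knn_accuracy(embeddings, group_ids, k=5):
    # embeddings: (n, d) matrix of max-pooled encoder outputs.
    # group_ids: length-n array; sentences with equal ids share a UPOS sequence.
    # For each sentence, retrieve its k nearest neighbours by cosine distance
    # and count the fraction that fall in the same UPOS group (assumed gold set).
    group_ids = np.asarray(group_ids)
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cos_dist = 1.0 - x @ x.T                   # pairwise cosine distances
    np.fill_diagonal(cos_dist, np.inf)         # exclude self-matches
    neighbours = np.argsort(cos_dist, axis=1)[:, :k]
    correct = group_ids[neighbours] == group_ids[:, None]
    return float(correct.mean())

# Illustrative usage: 10 UPOS groups of 6 sentences with random embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(60, 128))
groups = np.repeat(np.arange(10), 6)
print(knn_accuracy(emb, groups, k=1), knn_accuracy(emb, groups, k=5))
```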
The high nearest neighbours accuracy indicates that syntax information was successfully captured by the embeddings. Table TABREF22 also shows that the syntactic information of multiple languages was captured by a single embedding model. Experiments ::: Quality of Syntactic Embeddings ::: Language Model A number of recent works BIBREF7, BIBREF8 have probed language models to determine if they contain syntactic information. We applied the same nearest neighbours experiment (with the same test sets) on a number of existing language models: Universal Sentence Encoder (USE) BIBREF17, LASER, and BERT. For USE we used models available from TensorHub. For LASER we used models and created embeddings from the official repository . For BERT, we report the results using max (BERT$_{max}$) and average-pooling (BERT$_{avg}$), obtained from the BERT embedding toolkit with the multilingual cased model (104 languages, 12-layers, 768-hidden units, 12-heads), and `pooled-output' (BERT$_{output}$) from the TensorHub version of the model with the same parameters. We computed the nearest neighbours experiment for all languages in the training data for the above models. The results are shown in Table TABREF27. The results show that general purpose language models do capture syntax information, which varies greatly across languages and models. The nearest neighbours accuracy of our syntactic embeddings in Table TABREF22 significantly outperforms the general purpose language models. Arguably these language models were trained using different training data. However, this is a reasonable comparison because many real-world applications rely on released pre-trained language models for syntactically related information. Hence, we want to show that we can use much smaller models trained with direct supervision, to obtain syntactic embeddings with similar or better quality. Nonetheless, the training method used in this work can certainly be extended to architectures similar to BERT or USE. Experiments ::: Functional Dissimilarity The experiments in the previous section showed that the proposed syntactic embeddings formed cohesive clusters in the embedding space, based on UPOS sequence similarities. We further studied the spatial relationships within the embeddings. Word2Vec BIBREF18 examined spatial relationships between embeddings and compared them to the semantic relationships between words. Operations on vectors in the embedding space such as $King - Man + Woman = Queen$ created vectors that also correlated with similar operations in semantics. Such semantic comparisons do not directly translate to syntactic embeddings. However, syntax information shifts with edits on POS sequences. Hence, we examined the spatial relationships between syntactic embeddings by comparing their cosine similarities with the edit distances between UPOS sequence pairs. Given $n$ UPOS sequences $U = \lbrace u_0,...,u_{n-1}\rbrace $, we compute the matrix $L \in \mathbb {R}^{n \times n}$, where $l_{ij} = l(u_i, u_j)$, the complement of the normalized Levenshtein distance between $u_i$ and $u_j$. Given the set of embedding vectors $\lbrace e_0,...,e_{n-1}\rbrace $ where $e_i$ is the embedding for sentence $s_i$, we also compute $D \in \mathbb {R}^{n \times n}$, where $d_{ij} = d(e_i, e_j)$. We further normalize $d_{ij}$ to be within $[0, 1]$ by min-max normalization to obtain $\hat{D} = \operatorname{minMax}(D)$. 
Following BIBREF19, we define the functional dissimilarity score by Intuitively, UPOS sequences that are similar (smaller edit distance) should be embedded close to each other in the embedding space, and embeddings that are further away should have dissimilar UPOS sequences. Hence, the functional dissimilarity score is low if the relative changes in UPOS sequences are reflected in the embedding space. The score is high if such changes are not reflected. The functional dissimilarity score was computed using sentences from the test set in CoNLL 2017 Universal Dependencies task BIBREF20 for the relevant languages with the provided UPOS sequences. Furthermore, none of the evaluated models, including the proposed method, were trained with CoNLL2017 data. We compared the functional dissimilarity scores of our syntactic representations against embeddings obtained from BERT and LASER, to further demonstrate that simple network structures with explicit supervision may be sufficient to capture syntactic structure. All the results are shown in Table TABREF29. We only show the best (lowest) results from BERT. Experiments ::: Transfer Performance of Syntactic Embeddings Many NLP tasks utilize POS as features, but human annotated POS sequences are difficult and expensive to obtain. Thus, it is important to know if we can learn sentences-level syntactic embeddings for low-sources languages without treebanks. We performed zero-shot transfer of the syntactic embeddings for French, Portuguese and Indonesian. French and Portuguese are simulated low-resource languages, while Indonesian is a true low-resource language. We reported the 1-NN and 5-NN accuracies for all languages using the same evaluation setting as described in the previous section. The results are shown in Table TABREF31 (top). We also fine-tuned the learned syntactic embeddings on the low-resource language for a varying number of training data and languages. The results are shown in Table TABREF31 (bottom). In this table, the low-resource language is denoted as the `source', while the high-resource language(s) is denoted as the `target'. With this training method, no UPOS tag information was provided to the model for the `source' languages, where supervising information comes solely from parallel sentences and UPOS tags in high-resource languages. The results show that for a new language (French and Portuguese) that is similar to the family of pre-training languages, there are two ways to achieve higher 1-NN accuracy. If the number of unique sentences in the new language is small, accuracy can be improved by increasing the size of the parallel corpora used to fine-tune. If only one parallel corpus is available, accuracy can be improved by increasing the number of unique sentence-pairs used to fine-tune. For a new language that is dissimilar to the family of pre-training languages, e.g. Indonesian in Table TABREF31, the above methods only improved nearest neighbours accuracy slightly. This may be caused by differing data distribution or by tagger inaccuracies. The results for Indonesian do indicate that some syntactic structure can be learned by using our method, even for a dissimilar language. A future direction is to conduct a rigorous analysis of transfer learning between languages from the same versus different language families. Conclusion We examined the possibility of creating syntactic embeddings by using a multilingual method based on sequence-to-sequence models. 
In contrast to prior work, our method only requires parallel corpora and UPOS tags in the target language. We studied the quality of learned embeddings by examining nearest neighbours in the embedding space and investigating their functional dissimilarity. These results were compared against recent state-of-the-art language models. We also showed that pre-training with a parallel corpus allowed the syntactic embeddings to be transferred to low-resource languages via few-shot fine-tuning. Our evaluations indicated that syntactic structure can be learnt by using simple network architectures and explicit supervision. Future directions include improving the transfer performance for low-resource languages, disentangling semantic and syntactic embeddings, and analyzing the effect of transfer learning between languages belonging to the same versus different language families.
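To summarize the architecture and training objective described in the Method section above, here is a rough PyTorch-style sketch: a two-layer BiLSTM encoder over BPE embeddings, whose final hidden and cell states are projected to initialize a single-layer LSTM decoder, and whose outputs are max-pooled and fed, together with the previous UPOS embedding and a language embedding, into each decoder step, with a cross-entropy loss over UPOS tags. All hyperparameters, dimensions, and names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SyntacticSeq2Seq(nn.Module):
    # Sketch of the described model: BPE ids -> 2-layer BiLSTM encoder;
    # a 1-layer LSTM decoder predicts target-language UPOS tags, conditioned
    # on a language embedding and a max-pooling over encoder outputs.
    def __init__(self, bpe_vocab=8000, upos_vocab=18, n_langs=6,
                 emb_dim=256, enc_hidden=512, dec_hidden=512):
        super().__init__()
        self.src_emb = nn.Embedding(bpe_vocab, emb_dim)
        self.pos_emb = nn.Embedding(upos_vocab, emb_dim)
        self.lang_emb = nn.Embedding(n_langs, emb_dim)
        self.encoder = nn.LSTM(emb_dim, enc_hidden, num_layers=2,
                               bidirectional=True, batch_first=True, dropout=0.2)
        # Concatenated final hidden and cell states are linearly projected
        # to initialize the decoder state.
        self.init_h = nn.Linear(4 * enc_hidden, dec_hidden)
        self.init_c = nn.Linear(4 * enc_hidden, dec_hidden)
        self.decoder = nn.LSTM(2 * emb_dim + 2 * enc_hidden, dec_hidden,
                               num_layers=1, batch_first=True)
        self.out = nn.Linear(dec_hidden, upos_vocab)

    def forward(self, bpe_ids, lang_ids, prev_upos_ids):
        # bpe_ids: (B, S) source BPE ids; lang_ids: (B,) target-language ids;
        # prev_upos_ids: (B, T) shifted gold UPOS ids (teacher forcing).
        enc_out, (h_n, c_n) = self.encoder(self.src_emb(bpe_ids))
        h_last = torch.cat([h_n[-2], h_n[-1]], dim=-1)   # final layer, both dirs
        c_last = torch.cat([c_n[-2], c_n[-1]], dim=-1)
        state = torch.cat([h_last, c_last], dim=-1)      # (B, 4*enc_hidden)
        h0 = self.init_h(state).unsqueeze(0)             # (1, B, dec_hidden)
        c0 = self.init_c(state).unsqueeze(0)
        pooled = enc_out.max(dim=1).values               # max-pool over time
        T = prev_upos_ids.size(1)
        dec_in = torch.cat([
            self.pos_emb(prev_upos_ids),                              # prev UPOS
            self.lang_emb(lang_ids).unsqueeze(1).expand(-1, T, -1),   # language
            pooled.unsqueeze(1).expand(-1, T, -1),                    # encoder pool
        ], dim=-1)
        dec_out, _ = self.decoder(dec_in, (h0, c0))
        return self.out(dec_out)                         # (B, T, upos_vocab) logits

# Illustrative training step with random data and the stated optimizer settings.
model = SyntacticSeq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
bpe = torch.randint(0, 8000, (4, 20))
lang = torch.randint(0, 6, (4,))
prev = torch.randint(0, 18, (4, 15))
gold = torch.randint(0, 18, (4, 15))
loss = loss_fn(model(bpe, lang, prev).reshape(-1, 18), gold.reshape(-1))
loss.backward()
opt.step()
```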
functional dissimilarity score, nearest neighbours experiment
6a566095e25cbb56330456d7a1f3471693817712
6a566095e25cbb56330456d7a1f3471693817712_0
Q: Do they do quantitative quality analysis of learned embeddings? Text: Introduction Recent success in language modelling and representation learning have largely focused on learning the semantic structures of language BIBREF0. Syntactic information, such as part-of-speech (POS) sequences, is an essential part of language and can be important for tasks such as authorship identification, writing-style analysis, translation, etc. Methods that learn syntactic representations have received relatively less attention, with focus mostly on evaluating the semantic information contained in representations produced by language models. Multilingual embeddings have been shown to achieve top performance in many downstream tasks BIBREF1, BIBREF2. By training over large corpora, these models have shown to generalize to similar but unseen contexts. However, words contain multiple types of information: semantic, syntactic, and morphologic. Therefore, it is possible that syntactically different passages have similar embeddings due to their semantic properties. On tasks like the ones mentioned above, discriminating using patterns that include semantic information may result in poor generalization, specially when datasets are not sufficiently representative. In this work, we study methods that learn sentence-level embeddings that explicitly capture syntactic information. We focus on variations of sequence-to-sequence models BIBREF3, trained using a multilingual corpus with universal part-of-speech (UPOS) tags for the target languages only. By using target-language UPOS tags in the training process, we are able to learn sentence-level embeddings for source languages that lack UPOS tagging data. This property can be leveraged to learn syntactic embeddings for low-resource languages. Our main contributions are: to study whether sentence-level syntactic embeddings can be learned efficiently, to evaluate the structure of the learned embedding space, and to explore the potential of learning syntactic embeddings for low-resource languages. We evaluate the syntactic structure of sentence-level embeddings by performing nearest-neighbour (NN) search in the embedding space. We show that these embeddings exhibit properties that correlate with similarities between UPOS sequences of the original sentences. We also evaluate the embeddings produced by language models such as BERT BIBREF0 and show that they contain some syntactic information. We further explore our method in the few-shot setting for low-resource source languages without large, high quality treebank datasets. We show its transfer-learning capabilities on artificial and real low-resource languages. Lastly, we show that training on multilingual parallel corpora significantly improves the learned syntactic embeddings. This is similar to existing results for models trained (or pre-trained) on multiple languages BIBREF4, BIBREF2 for downstream tasks BIBREF5. Related Work Training semantic embeddings based on multilingual data was studied by MUSE BIBREF1 and LASER BIBREF2 at the word and sentence levels respectively. Multi-task training for disentangling semantic and syntactic information was studied in BIBREF6. This work also used a nearest neighbour method to evaluate the syntactic properties of models, though their focus was on disentanglement rather than embedding quality. The syntactic content of language models was studied by examining syntax trees BIBREF7, subject-object agreement BIBREF8, and evaluation on syntactically altered datasets BIBREF9, BIBREF10. 
These works did not examine multilingual models. Distant supervision BIBREF11, BIBREF12 has been used to learn POS taggers for low-resource languages using cross-lingual corpora. The goal of these works is to learn word-level POS tags, rather than sentence-level syntactic embeddings. Furthermore, our method does not require explicit POS sequences for the low-resource language, which results in a simpler training process than distant supervision. Method ::: Architecture We iterated upon the model architecture proposed in LASER BIBREF2. The model consists of a two-layer Bi-directional LSTM (BiLSTM) encoder and a single-layer LSTM decoder. The encoder is language agnostic as no language context is provided as input. In contrast to LASER, we use the concatenation of last hidden and cell states of the encoder to initialize the decoder through a linear projection. At each time-step, the decoder takes an embedding of the previous POS target concatenated with an embedding representing the language context, as well as a max-pooling over encoder outputs. Figure FIGREF2 shows the architecture of the proposed model. The input embeddings for the encoder were created using a jointly learned Byte-Pair-Encoding (BPE) vocabulary BIBREF13 for all languages by using sentencepiece. Method ::: Training Training was performed using an aligned parallel corpus. Given a source-target aligned sentence pair (as in machine translation), we: Convert the sentence in the source language into BPE Look up embeddings for BPE as the input to the encoder Convert the sentence in a target language into UPOS tags, in the tagset of the target language. Use the UPOS tags in step 3 as the targets for a cross-entropy loss. Hence, the task is to predict the UPOS sequence computed from the translated input sentence. The UPOS targets were obtained using StandfordNLP BIBREF14 . Dropout with a drop probability of 0.2 was applied to the encoder. The Adam optimizer BIBREF15 was used with a constant learning rate of $0.0001$. Table TABREF4 shows a full list of the hyperparameters used in the training procedure. Method ::: Dataset To create our training dataset, we followed an approach similar to LASER. The dataset contains 6 languages: English, Spanish, German, Dutch, Korean and Chinese Mandarin. These languages use 3 different scripts, 2 different language orderings, and belong to 4 language families. English, Spanish, German, and Dutch use a Latin-based script. However, Spanish is a Romantic language while the others are Germanic languages. Chinese Mandarin and Korean are included because they use non-latin based scripts and originate from language families distinct from the other languages. Although the grammatical rules vary between the selected languages, they share a number of key characteristics such as the Subject-Verb-Object ordering, except Korean (which mainly follows the Subject-Object-Verb order). We hope to extend our work to other languages with different scripts and sentence structures, such as Arabic, Japanese, Hindi, etc. in the future. The dataset was created by using translations provided by Tatoeba and OpenSubtitles BIBREF16. They were chosen for their high availability in multiple languages. Statistics of the final training dataset are shown in Table TABREF14. Rows and columns correspond to source and target languages respectively. Method ::: Dataset ::: Tatoeba Tatoeba is a freely available crowd-annotated dataset for language learning. We selected all sentences in English, Spanish, German, Dutch, and Korean. 
We pruned the dataset to contain only sentences with at least one translation to any of the other languages. The final training set contains 1.36M translation sentence pairs from this source. Method ::: Dataset ::: OpenSubtitles We augmented our training data by using the 2018 OpenSubtitles dataset. OpenSubtitles is a publicly available dataset based on movie subtitles BIBREF16. We created our training dataset from selected aligned subtitles by taking the unique translations among the first million sentences, for each aligned parallel corpus. We further processed the data by pruning to remove samples with less than 3 words, multiple sentences, or incomplete sentences. The resulting dataset contains 1.9M translation sentence pairs from this source. Experiments We aim to address the following questions: Can syntactic structures be embedded? For multiple languages? Can parallel corpora be used to learn syntactic structure for low-resource languages? Does multilingual pre-training improve syntactic embeddings? We address question 1 in Secs. SECREF20 and SECREF28 by evaluating the quality of syntactic and semantic embeddings in several ways. Questions 2 and 3 are addressed in Sec. SECREF30 by studying the transfer-learning performance of syntactic embeddings. Experiments ::: Quality of Syntactic Embeddings We studied the quality of the learned syntactic embeddings by using a nearest-neighbour (NN) method. First, we calculated the UPOS sequence of all sentences in the Tatoeba dataset by using a tagger. Sentences were then assigned to distinct groups according to their UPOS sequence, i.e., all sentences belonging to the same group had the same UPOS sequence. For all languages except Korean, a held-out test set was created by randomly sampling groups that contained at least 6 sentences. For Korean, all groups containing at least 6 sentences were kept as the test set since the dataset is small. During evaluation, we applied max-pooling to the outputs of the encoder to obtain the syntactic embeddings of the held-out sentences. For each syntactic embedding, we find its top nearest neighbour (1-NN) and top-5 nearest neighbours (5-NN) in the embedding space for the held-out sentences, based on their UPOS group. Given $n$ sentences $S = \lbrace s_0, \dots , s_{n-1}\rbrace $ and their embeddings $E = \lbrace e_0, \dots , e_{n-1}\rbrace $, for each $s_i$ there is a set of $k$ gold nearest neighbours $G(i, k) = \lbrace g_0, \dots , g_{k-1}\rbrace $, $G(i, k) \subseteq S$ such that $d(s_i, g) \le d(s_i, s) \textrm { for all } g \in G(i, k) \textrm { and } s \in S \setminus G(i, k)$, where $d(\cdot , \cdot )$ is the cosine distance. Given embedding $e_i$, we calculate cosine distances $\lbrace d(e_i, e_j) \textrm { for } e_j \in E, e_j \ne e_i\rbrace $ and sort them into non-decreasing order $d_{j_0} \le d_{j_1} \le \dots \le d_{j_{n-2}}$. We consider the ordering to be unique as the probability of embedding cosine distances being equal is very small. The set of embedded $k$-nearest neighbours of $s_i$ is defined as Finally, the $k$-nearest neighbours accuracy for $s_i$ is given by A good embedding model should cluster the embeddings for similar inputs in the embedding space. Hence, the 5-NN test can be seen as an indicator of how cohesive the embedding space is. The results are shown in Table TABREF22. The differences in the number of groups in each language are due to different availabilities of sentences and sentence-types in the Tatoeba dataset. 
The high nearest neighbours accuracy indicates that syntax information was successfully captured by the embeddings. Table TABREF22 also shows that the syntactic information of multiple languages was captured by a single embedding model. Experiments ::: Quality of Syntactic Embeddings ::: Language Model A number of recent works BIBREF7, BIBREF8 have probed language models to determine if they contain syntactic information. We applied the same nearest neighbours experiment (with the same test sets) on a number of existing language models: Universal Sentence Encoder (USE) BIBREF17, LASER, and BERT. For USE we used models available from TensorHub. For LASER we used models and created embeddings from the official repository . For BERT, we report the results using max (BERT$_{max}$) and average-pooling (BERT$_{avg}$), obtained from the BERT embedding toolkit with the multilingual cased model (104 languages, 12-layers, 768-hidden units, 12-heads), and `pooled-output' (BERT$_{output}$) from the TensorHub version of the model with the same parameters. We computed the nearest neighbours experiment for all languages in the training data for the above models. The results are shown in Table TABREF27. The results show that general purpose language models do capture syntax information, which varies greatly across languages and models. The nearest neighbours accuracy of our syntactic embeddings in Table TABREF22 significantly outperforms the general purpose language models. Arguably these language models were trained using different training data. However, this is a reasonable comparison because many real-world applications rely on released pre-trained language models for syntactically related information. Hence, we want to show that we can use much smaller models trained with direct supervision, to obtain syntactic embeddings with similar or better quality. Nonetheless, the training method used in this work can certainly be extended to architectures similar to BERT or USE. Experiments ::: Functional Dissimilarity The experiments in the previous section showed that the proposed syntactic embeddings formed cohesive clusters in the embedding space, based on UPOS sequence similarities. We further studied the spatial relationships within the embeddings. Word2Vec BIBREF18 examined spatial relationships between embeddings and compared them to the semantic relationships between words. Operations on vectors in the embedding space such as $King - Man + Woman = Queen$ created vectors that also correlated with similar operations in semantics. Such semantic comparisons do not directly translate to syntactic embeddings. However, syntax information shifts with edits on POS sequences. Hence, we examined the spatial relationships between syntactic embeddings by comparing their cosine similarities with the edit distances between UPOS sequence pairs. Given $n$ UPOS sequences $U = \lbrace u_0,...,u_{n-1}\rbrace $, we compute the matrix $L \in \mathbb {R}^{n \times n}$, where $l_{ij} = l(u_i, u_j)$, the complement of the normalized Levenshtein distance between $u_i$ and $u_j$. Given the set of embedding vectors $\lbrace e_0,...,e_{n-1}\rbrace $ where $e_i$ is the embedding for sentence $s_i$, we also compute $D \in \mathbb {R}^{n \times n}$, where $d_{ij} = d(e_i, e_j)$. We further normalize $d_{ij}$ to be within $[0, 1]$ by min-max normalization to obtain $\hat{D} = \operatorname{minMax}(D)$. 
Following BIBREF19, we define the functional dissimilarity score by Intuitively, UPOS sequences that are similar (smaller edit distance) should be embedded close to each other in the embedding space, and embeddings that are further away should have dissimilar UPOS sequences. Hence, the functional dissimilarity score is low if the relative changes in UPOS sequences are reflected in the embedding space. The score is high if such changes are not reflected. The functional dissimilarity score was computed using sentences from the test set in CoNLL 2017 Universal Dependencies task BIBREF20 for the relevant languages with the provided UPOS sequences. Furthermore, none of the evaluated models, including the proposed method, were trained with CoNLL2017 data. We compared the functional dissimilarity scores of our syntactic representations against embeddings obtained from BERT and LASER, to further demonstrate that simple network structures with explicit supervision may be sufficient to capture syntactic structure. All the results are shown in Table TABREF29. We only show the best (lowest) results from BERT. Experiments ::: Transfer Performance of Syntactic Embeddings Many NLP tasks utilize POS as features, but human annotated POS sequences are difficult and expensive to obtain. Thus, it is important to know if we can learn sentences-level syntactic embeddings for low-sources languages without treebanks. We performed zero-shot transfer of the syntactic embeddings for French, Portuguese and Indonesian. French and Portuguese are simulated low-resource languages, while Indonesian is a true low-resource language. We reported the 1-NN and 5-NN accuracies for all languages using the same evaluation setting as described in the previous section. The results are shown in Table TABREF31 (top). We also fine-tuned the learned syntactic embeddings on the low-resource language for a varying number of training data and languages. The results are shown in Table TABREF31 (bottom). In this table, the low-resource language is denoted as the `source', while the high-resource language(s) is denoted as the `target'. With this training method, no UPOS tag information was provided to the model for the `source' languages, where supervising information comes solely from parallel sentences and UPOS tags in high-resource languages. The results show that for a new language (French and Portuguese) that is similar to the family of pre-training languages, there are two ways to achieve higher 1-NN accuracy. If the number of unique sentences in the new language is small, accuracy can be improved by increasing the size of the parallel corpora used to fine-tune. If only one parallel corpus is available, accuracy can be improved by increasing the number of unique sentence-pairs used to fine-tune. For a new language that is dissimilar to the family of pre-training languages, e.g. Indonesian in Table TABREF31, the above methods only improved nearest neighbours accuracy slightly. This may be caused by differing data distribution or by tagger inaccuracies. The results for Indonesian do indicate that some syntactic structure can be learned by using our method, even for a dissimilar language. A future direction is to conduct a rigorous analysis of transfer learning between languages from the same versus different language families. Conclusion We examined the possibility of creating syntactic embeddings by using a multilingual method based on sequence-to-sequence models. 
In contrast to prior work, our method only requires parallel corpora and UPOS tags in the target language. We studied the quality of learned embeddings by examining nearest neighbours in the embedding space and investigating their functional dissimilarity. These results were compared against recent state-of-the-art language models. We also showed that pre-training with a parallel corpus allowed the syntactic embeddings to be transferred to low-resource languages via few-shot fine-tuning. Our evaluations indicated that syntactic structure can be learnt by using simple network architectures and explicit supervision. Future directions include improving the transfer performance for low-resource languages, disentangling semantic and syntactic embeddings, and analyzing the effect of transfer learning between languages belonging to the same versus different language families.
Yes
56c6ff65c64ca85951fdea54d6b096f28393c128
56c6ff65c64ca85951fdea54d6b096f28393c128_0
Q: Do they evaluate on downstream tasks? Text: Introduction Recent success in language modelling and representation learning have largely focused on learning the semantic structures of language BIBREF0. Syntactic information, such as part-of-speech (POS) sequences, is an essential part of language and can be important for tasks such as authorship identification, writing-style analysis, translation, etc. Methods that learn syntactic representations have received relatively less attention, with focus mostly on evaluating the semantic information contained in representations produced by language models. Multilingual embeddings have been shown to achieve top performance in many downstream tasks BIBREF1, BIBREF2. By training over large corpora, these models have shown to generalize to similar but unseen contexts. However, words contain multiple types of information: semantic, syntactic, and morphologic. Therefore, it is possible that syntactically different passages have similar embeddings due to their semantic properties. On tasks like the ones mentioned above, discriminating using patterns that include semantic information may result in poor generalization, specially when datasets are not sufficiently representative. In this work, we study methods that learn sentence-level embeddings that explicitly capture syntactic information. We focus on variations of sequence-to-sequence models BIBREF3, trained using a multilingual corpus with universal part-of-speech (UPOS) tags for the target languages only. By using target-language UPOS tags in the training process, we are able to learn sentence-level embeddings for source languages that lack UPOS tagging data. This property can be leveraged to learn syntactic embeddings for low-resource languages. Our main contributions are: to study whether sentence-level syntactic embeddings can be learned efficiently, to evaluate the structure of the learned embedding space, and to explore the potential of learning syntactic embeddings for low-resource languages. We evaluate the syntactic structure of sentence-level embeddings by performing nearest-neighbour (NN) search in the embedding space. We show that these embeddings exhibit properties that correlate with similarities between UPOS sequences of the original sentences. We also evaluate the embeddings produced by language models such as BERT BIBREF0 and show that they contain some syntactic information. We further explore our method in the few-shot setting for low-resource source languages without large, high quality treebank datasets. We show its transfer-learning capabilities on artificial and real low-resource languages. Lastly, we show that training on multilingual parallel corpora significantly improves the learned syntactic embeddings. This is similar to existing results for models trained (or pre-trained) on multiple languages BIBREF4, BIBREF2 for downstream tasks BIBREF5. Related Work Training semantic embeddings based on multilingual data was studied by MUSE BIBREF1 and LASER BIBREF2 at the word and sentence levels respectively. Multi-task training for disentangling semantic and syntactic information was studied in BIBREF6. This work also used a nearest neighbour method to evaluate the syntactic properties of models, though their focus was on disentanglement rather than embedding quality. The syntactic content of language models was studied by examining syntax trees BIBREF7, subject-object agreement BIBREF8, and evaluation on syntactically altered datasets BIBREF9, BIBREF10. 
These works did not examine multilingual models. Distant supervision BIBREF11, BIBREF12 has been used to learn POS taggers for low-resource languages using cross-lingual corpora. The goal of these works is to learn word-level POS tags, rather than sentence-level syntactic embeddings. Furthermore, our method does not require explicit POS sequences for the low-resource language, which results in a simpler training process than distant supervision. Method ::: Architecture We iterated upon the model architecture proposed in LASER BIBREF2. The model consists of a two-layer Bi-directional LSTM (BiLSTM) encoder and a single-layer LSTM decoder. The encoder is language agnostic as no language context is provided as input. In contrast to LASER, we use the concatenation of last hidden and cell states of the encoder to initialize the decoder through a linear projection. At each time-step, the decoder takes an embedding of the previous POS target concatenated with an embedding representing the language context, as well as a max-pooling over encoder outputs. Figure FIGREF2 shows the architecture of the proposed model. The input embeddings for the encoder were created using a jointly learned Byte-Pair-Encoding (BPE) vocabulary BIBREF13 for all languages by using sentencepiece. Method ::: Training Training was performed using an aligned parallel corpus. Given a source-target aligned sentence pair (as in machine translation), we: Convert the sentence in the source language into BPE Look up embeddings for BPE as the input to the encoder Convert the sentence in a target language into UPOS tags, in the tagset of the target language. Use the UPOS tags in step 3 as the targets for a cross-entropy loss. Hence, the task is to predict the UPOS sequence computed from the translated input sentence. The UPOS targets were obtained using StandfordNLP BIBREF14 . Dropout with a drop probability of 0.2 was applied to the encoder. The Adam optimizer BIBREF15 was used with a constant learning rate of $0.0001$. Table TABREF4 shows a full list of the hyperparameters used in the training procedure. Method ::: Dataset To create our training dataset, we followed an approach similar to LASER. The dataset contains 6 languages: English, Spanish, German, Dutch, Korean and Chinese Mandarin. These languages use 3 different scripts, 2 different language orderings, and belong to 4 language families. English, Spanish, German, and Dutch use a Latin-based script. However, Spanish is a Romantic language while the others are Germanic languages. Chinese Mandarin and Korean are included because they use non-latin based scripts and originate from language families distinct from the other languages. Although the grammatical rules vary between the selected languages, they share a number of key characteristics such as the Subject-Verb-Object ordering, except Korean (which mainly follows the Subject-Object-Verb order). We hope to extend our work to other languages with different scripts and sentence structures, such as Arabic, Japanese, Hindi, etc. in the future. The dataset was created by using translations provided by Tatoeba and OpenSubtitles BIBREF16. They were chosen for their high availability in multiple languages. Statistics of the final training dataset are shown in Table TABREF14. Rows and columns correspond to source and target languages respectively. Method ::: Dataset ::: Tatoeba Tatoeba is a freely available crowd-annotated dataset for language learning. We selected all sentences in English, Spanish, German, Dutch, and Korean. 
We pruned the dataset to contain only sentences with at least one translation to any of the other languages. The final training set contains 1.36M translation sentence pairs from this source. Method ::: Dataset ::: OpenSubtitles We augmented our training data by using the 2018 OpenSubtitles dataset. OpenSubtitles is a publicly available dataset based on movie subtitles BIBREF16. We created our training dataset from selected aligned subtitles by taking the unique translations among the first million sentences, for each aligned parallel corpus. We further processed the data by pruning to remove samples with less than 3 words, multiple sentences, or incomplete sentences. The resulting dataset contains 1.9M translation sentence pairs from this source. Experiments We aim to address the following questions: Can syntactic structures be embedded? For multiple languages? Can parallel corpora be used to learn syntactic structure for low-resource languages? Does multilingual pre-training improve syntactic embeddings? We address question 1 in Secs. SECREF20 and SECREF28 by evaluating the quality of syntactic and semantic embeddings in several ways. Questions 2 and 3 are addressed in Sec. SECREF30 by studying the transfer-learning performance of syntactic embeddings. Experiments ::: Quality of Syntactic Embeddings We studied the quality of the learned syntactic embeddings by using a nearest-neighbour (NN) method. First, we calculated the UPOS sequence of all sentences in the Tatoeba dataset by using a tagger. Sentences were then assigned to distinct groups according to their UPOS sequence, i.e., all sentences belonging to the same group had the same UPOS sequence. For all languages except Korean, a held-out test set was created by randomly sampling groups that contained at least 6 sentences. For Korean, all groups containing at least 6 sentences were kept as the test set since the dataset is small. During evaluation, we applied max-pooling to the outputs of the encoder to obtain the syntactic embeddings of the held-out sentences. For each syntactic embedding, we find its top nearest neighbour (1-NN) and top-5 nearest neighbours (5-NN) in the embedding space for the held-out sentences, based on their UPOS group. Given $n$ sentences $S = \lbrace s_0, \dots , s_{n-1}\rbrace $ and their embeddings $E = \lbrace e_0, \dots , e_{n-1}\rbrace $, for each $s_i$ there is a set of $k$ gold nearest neighbours $G(i, k) = \lbrace g_0, \dots , g_{k-1}\rbrace $, $G(i, k) \subseteq S$ such that $d(s_i, g) \le d(s_i, s) \textrm { for all } g \in G(i, k) \textrm { and } s \in S \setminus G(i, k)$, where $d(\cdot , \cdot )$ is the cosine distance. Given embedding $e_i$, we calculate cosine distances $\lbrace d(e_i, e_j) \textrm { for } e_j \in E, e_j \ne e_i\rbrace $ and sort them into non-decreasing order $d_{j_0} \le d_{j_1} \le \dots \le d_{j_{n-2}}$. We consider the ordering to be unique as the probability of embedding cosine distances being equal is very small. The set of embedded $k$-nearest neighbours of $s_i$ is defined as Finally, the $k$-nearest neighbours accuracy for $s_i$ is given by A good embedding model should cluster the embeddings for similar inputs in the embedding space. Hence, the 5-NN test can be seen as an indicator of how cohesive the embedding space is. The results are shown in Table TABREF22. The differences in the number of groups in each language are due to different availabilities of sentences and sentence-types in the Tatoeba dataset. 
The high nearest neighbours accuracy indicates that syntax information was successfully captured by the embeddings. Table TABREF22 also shows that the syntactic information of multiple languages was captured by a single embedding model. Experiments ::: Quality of Syntactic Embeddings ::: Language Model A number of recent works BIBREF7, BIBREF8 have probed language models to determine if they contain syntactic information. We applied the same nearest neighbours experiment (with the same test sets) on a number of existing language models: Universal Sentence Encoder (USE) BIBREF17, LASER, and BERT. For USE we used models available from TensorHub. For LASER we used models and created embeddings from the official repository . For BERT, we report the results using max (BERT$_{max}$) and average-pooling (BERT$_{avg}$), obtained from the BERT embedding toolkit with the multilingual cased model (104 languages, 12-layers, 768-hidden units, 12-heads), and `pooled-output' (BERT$_{output}$) from the TensorHub version of the model with the same parameters. We computed the nearest neighbours experiment for all languages in the training data for the above models. The results are shown in Table TABREF27. The results show that general purpose language models do capture syntax information, which varies greatly across languages and models. The nearest neighbours accuracy of our syntactic embeddings in Table TABREF22 significantly outperforms the general purpose language models. Arguably these language models were trained using different training data. However, this is a reasonable comparison because many real-world applications rely on released pre-trained language models for syntactically related information. Hence, we want to show that we can use much smaller models trained with direct supervision, to obtain syntactic embeddings with similar or better quality. Nonetheless, the training method used in this work can certainly be extended to architectures similar to BERT or USE. Experiments ::: Functional Dissimilarity The experiments in the previous section showed that the proposed syntactic embeddings formed cohesive clusters in the embedding space, based on UPOS sequence similarities. We further studied the spatial relationships within the embeddings. Word2Vec BIBREF18 examined spatial relationships between embeddings and compared them to the semantic relationships between words. Operations on vectors in the embedding space such as $King - Man + Woman = Queen$ created vectors that also correlated with similar operations in semantics. Such semantic comparisons do not directly translate to syntactic embeddings. However, syntax information shifts with edits on POS sequences. Hence, we examined the spatial relationships between syntactic embeddings by comparing their cosine similarities with the edit distances between UPOS sequence pairs. Given $n$ UPOS sequences $U = \lbrace u_0,...,u_{n-1}\rbrace $, we compute the matrix $L \in \mathbb {R}^{n \times n}$, where $l_{ij} = l(u_i, u_j)$, the complement of the normalized Levenshtein distance between $u_i$ and $u_j$. Given the set of embedding vectors $\lbrace e_0,...,e_{n-1}\rbrace $ where $e_i$ is the embedding for sentence $s_i$, we also compute $D \in \mathbb {R}^{n \times n}$, where $d_{ij} = d(e_i, e_j)$. We further normalize $d_{ij}$ to be within $[0, 1]$ by min-max normalization to obtain $\hat{D} = \operatorname{minMax}(D)$. 
Following BIBREF19, we define the functional dissimilarity score by Intuitively, UPOS sequences that are similar (smaller edit distance) should be embedded close to each other in the embedding space, and embeddings that are further away should have dissimilar UPOS sequences. Hence, the functional dissimilarity score is low if the relative changes in UPOS sequences are reflected in the embedding space. The score is high if such changes are not reflected. The functional dissimilarity score was computed using sentences from the test set in CoNLL 2017 Universal Dependencies task BIBREF20 for the relevant languages with the provided UPOS sequences. Furthermore, none of the evaluated models, including the proposed method, were trained with CoNLL2017 data. We compared the functional dissimilarity scores of our syntactic representations against embeddings obtained from BERT and LASER, to further demonstrate that simple network structures with explicit supervision may be sufficient to capture syntactic structure. All the results are shown in Table TABREF29. We only show the best (lowest) results from BERT. Experiments ::: Transfer Performance of Syntactic Embeddings Many NLP tasks utilize POS as features, but human annotated POS sequences are difficult and expensive to obtain. Thus, it is important to know if we can learn sentences-level syntactic embeddings for low-sources languages without treebanks. We performed zero-shot transfer of the syntactic embeddings for French, Portuguese and Indonesian. French and Portuguese are simulated low-resource languages, while Indonesian is a true low-resource language. We reported the 1-NN and 5-NN accuracies for all languages using the same evaluation setting as described in the previous section. The results are shown in Table TABREF31 (top). We also fine-tuned the learned syntactic embeddings on the low-resource language for a varying number of training data and languages. The results are shown in Table TABREF31 (bottom). In this table, the low-resource language is denoted as the `source', while the high-resource language(s) is denoted as the `target'. With this training method, no UPOS tag information was provided to the model for the `source' languages, where supervising information comes solely from parallel sentences and UPOS tags in high-resource languages. The results show that for a new language (French and Portuguese) that is similar to the family of pre-training languages, there are two ways to achieve higher 1-NN accuracy. If the number of unique sentences in the new language is small, accuracy can be improved by increasing the size of the parallel corpora used to fine-tune. If only one parallel corpus is available, accuracy can be improved by increasing the number of unique sentence-pairs used to fine-tune. For a new language that is dissimilar to the family of pre-training languages, e.g. Indonesian in Table TABREF31, the above methods only improved nearest neighbours accuracy slightly. This may be caused by differing data distribution or by tagger inaccuracies. The results for Indonesian do indicate that some syntactic structure can be learned by using our method, even for a dissimilar language. A future direction is to conduct a rigorous analysis of transfer learning between languages from the same versus different language families. Conclusion We examined the possibility of creating syntactic embeddings by using a multilingual method based on sequence-to-sequence models. 
In contrast to prior work, our method only requires parallel corpora and UPOS tags in the target language. We studied the quality of learned embeddings by examining nearest neighbours in the embedding space and investigating their functional dissimilarity. These results were compared against recent state-of-the-art language models. We also showed that pre-training with a parallel corpus allowed the syntactic embeddings to be transferred to low-resource languages via few-shot fine-tuning. Our evaluations indicated that syntactic structure can be learnt by using simple network architectures and explicit supervision. Future directions include improving the transfer performance for low-resource languages, disentangling semantic and syntactic embeddings, and analyzing the effect of transfer learning between languages belonging to the same versus different language families.
Yes
356e462f7966e30665a387ed7a9ad2e830479da6
356e462f7966e30665a387ed7a9ad2e830479da6_0
Q: Which corpus do they use? Text: Introduction Recent success in language modelling and representation learning have largely focused on learning the semantic structures of language BIBREF0. Syntactic information, such as part-of-speech (POS) sequences, is an essential part of language and can be important for tasks such as authorship identification, writing-style analysis, translation, etc. Methods that learn syntactic representations have received relatively less attention, with focus mostly on evaluating the semantic information contained in representations produced by language models. Multilingual embeddings have been shown to achieve top performance in many downstream tasks BIBREF1, BIBREF2. By training over large corpora, these models have shown to generalize to similar but unseen contexts. However, words contain multiple types of information: semantic, syntactic, and morphologic. Therefore, it is possible that syntactically different passages have similar embeddings due to their semantic properties. On tasks like the ones mentioned above, discriminating using patterns that include semantic information may result in poor generalization, specially when datasets are not sufficiently representative. In this work, we study methods that learn sentence-level embeddings that explicitly capture syntactic information. We focus on variations of sequence-to-sequence models BIBREF3, trained using a multilingual corpus with universal part-of-speech (UPOS) tags for the target languages only. By using target-language UPOS tags in the training process, we are able to learn sentence-level embeddings for source languages that lack UPOS tagging data. This property can be leveraged to learn syntactic embeddings for low-resource languages. Our main contributions are: to study whether sentence-level syntactic embeddings can be learned efficiently, to evaluate the structure of the learned embedding space, and to explore the potential of learning syntactic embeddings for low-resource languages. We evaluate the syntactic structure of sentence-level embeddings by performing nearest-neighbour (NN) search in the embedding space. We show that these embeddings exhibit properties that correlate with similarities between UPOS sequences of the original sentences. We also evaluate the embeddings produced by language models such as BERT BIBREF0 and show that they contain some syntactic information. We further explore our method in the few-shot setting for low-resource source languages without large, high quality treebank datasets. We show its transfer-learning capabilities on artificial and real low-resource languages. Lastly, we show that training on multilingual parallel corpora significantly improves the learned syntactic embeddings. This is similar to existing results for models trained (or pre-trained) on multiple languages BIBREF4, BIBREF2 for downstream tasks BIBREF5. Related Work Training semantic embeddings based on multilingual data was studied by MUSE BIBREF1 and LASER BIBREF2 at the word and sentence levels respectively. Multi-task training for disentangling semantic and syntactic information was studied in BIBREF6. This work also used a nearest neighbour method to evaluate the syntactic properties of models, though their focus was on disentanglement rather than embedding quality. The syntactic content of language models was studied by examining syntax trees BIBREF7, subject-object agreement BIBREF8, and evaluation on syntactically altered datasets BIBREF9, BIBREF10. 
These works did not examine multilingual models. Distant supervision BIBREF11, BIBREF12 has been used to learn POS taggers for low-resource languages using cross-lingual corpora. The goal of these works is to learn word-level POS tags, rather than sentence-level syntactic embeddings. Furthermore, our method does not require explicit POS sequences for the low-resource language, which results in a simpler training process than distant supervision. Method ::: Architecture We iterated upon the model architecture proposed in LASER BIBREF2. The model consists of a two-layer Bi-directional LSTM (BiLSTM) encoder and a single-layer LSTM decoder. The encoder is language agnostic as no language context is provided as input. In contrast to LASER, we use the concatenation of the last hidden and cell states of the encoder to initialize the decoder through a linear projection. At each time-step, the decoder takes an embedding of the previous POS target concatenated with an embedding representing the language context, as well as a max-pooling over encoder outputs. Figure FIGREF2 shows the architecture of the proposed model. The input embeddings for the encoder were created using a jointly learned Byte-Pair-Encoding (BPE) vocabulary BIBREF13 for all languages by using sentencepiece. Method ::: Training Training was performed using an aligned parallel corpus. Given a source-target aligned sentence pair (as in machine translation), we: (1) convert the sentence in the source language into BPE; (2) look up embeddings for the BPE tokens as the input to the encoder; (3) convert the sentence in a target language into UPOS tags, in the tagset of the target language; and (4) use the UPOS tags from step 3 as the targets for a cross-entropy loss. Hence, the task is to predict the UPOS sequence computed from the translated input sentence. The UPOS targets were obtained using StanfordNLP BIBREF14. Dropout with a drop probability of 0.2 was applied to the encoder. The Adam optimizer BIBREF15 was used with a constant learning rate of $0.0001$. Table TABREF4 shows a full list of the hyperparameters used in the training procedure. Method ::: Dataset To create our training dataset, we followed an approach similar to LASER. The dataset contains 6 languages: English, Spanish, German, Dutch, Korean and Chinese Mandarin. These languages use 3 different scripts, 2 different language orderings, and belong to 4 language families. English, Spanish, German, and Dutch use a Latin-based script. However, Spanish is a Romance language while the others are Germanic languages. Chinese Mandarin and Korean are included because they use non-Latin scripts and originate from language families distinct from the other languages. Although the grammatical rules vary between the selected languages, they share a number of key characteristics, such as the Subject-Verb-Object ordering, except Korean (which mainly follows the Subject-Object-Verb order). We hope to extend our work to other languages with different scripts and sentence structures, such as Arabic, Japanese, Hindi, etc. in the future. The dataset was created by using translations provided by Tatoeba and OpenSubtitles BIBREF16. They were chosen for their high availability in multiple languages. Statistics of the final training dataset are shown in Table TABREF14. Rows and columns correspond to source and target languages respectively. Method ::: Dataset ::: Tatoeba Tatoeba is a freely available crowd-annotated dataset for language learning. We selected all sentences in English, Spanish, German, Dutch, and Korean.
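To make the architecture and training step described above concrete, here is a minimal sketch, assuming PyTorch, batch-first tensors, and toy vocabulary and tag-set sizes; the module names and hyperparameters are illustrative, not the authors' implementation.

```python
# Minimal sketch: BPE -> 2-layer BiLSTM encoder -> 1-layer LSTM decoder over UPOS tags.
import torch
import torch.nn as nn

class SyntaxSeq2Seq(nn.Module):
    def __init__(self, bpe_vocab=8000, n_upos=18, n_langs=6,
                 emb=128, hid=256, pos_emb=64, lang_emb=32):
        super().__init__()
        self.src_emb = nn.Embedding(bpe_vocab, emb)
        # Two-layer BiLSTM encoder; no language context is given to it.
        self.encoder = nn.LSTM(emb, hid, num_layers=2,
                               bidirectional=True, batch_first=True)
        self.pos_emb = nn.Embedding(n_upos, pos_emb)
        self.lang_emb = nn.Embedding(n_langs, lang_emb)
        # Decoder init: linear projection of concatenated last hidden/cell states.
        self.bridge = nn.Linear(4 * hid, 2 * hid)
        # Decoder input: previous UPOS embedding + language embedding + pooled encoder outputs.
        self.decoder = nn.LSTM(pos_emb + lang_emb + 2 * hid, hid, batch_first=True)
        self.out = nn.Linear(hid, n_upos)

    def forward(self, bpe_ids, tgt_upos, tgt_lang):
        enc_out, (h_n, c_n) = self.encoder(self.src_emb(bpe_ids))
        # Last encoder layer, both directions.
        h_last = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        c_last = torch.cat([c_n[-2], c_n[-1]], dim=-1)
        h0c0 = torch.tanh(self.bridge(torch.cat([h_last, c_last], dim=-1)))
        h0, c0 = h0c0.chunk(2, dim=-1)
        h0, c0 = h0.unsqueeze(0).contiguous(), c0.unsqueeze(0).contiguous()
        # Max-pool over encoder outputs, repeated at every decoder step.
        pooled = enc_out.max(dim=1).values
        T = tgt_upos.size(1)
        ctx = pooled.unsqueeze(1).expand(-1, T, -1)
        lang = self.lang_emb(tgt_lang).unsqueeze(1).expand(-1, T, -1)
        dec_in = torch.cat([self.pos_emb(tgt_upos), lang, ctx], dim=-1)
        dec_out, _ = self.decoder(dec_in, (h0, c0))
        return self.out(dec_out)                        # (batch, T, n_upos)

# One teacher-forced training step on random toy data (BOS handling omitted for brevity).
model = SyntaxSeq2Seq()
bpe = torch.randint(0, 8000, (4, 20))                   # source-side BPE ids
upos = torch.randint(0, 18, (4, 15))                    # target-side UPOS ids
lang = torch.randint(0, 6, (4,))                        # target language ids
logits = model(bpe, upos[:, :-1], lang)
loss = nn.functional.cross_entropy(logits.reshape(-1, 18), upos[:, 1:].reshape(-1))
loss.backward()
```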
We pruned the dataset to contain only sentences with at least one translation to any of the other languages. The final training set contains 1.36M translation sentence pairs from this source. Method ::: Dataset ::: OpenSubtitles We augmented our training data by using the 2018 OpenSubtitles dataset. OpenSubtitles is a publicly available dataset based on movie subtitles BIBREF16. We created our training dataset from selected aligned subtitles by taking the unique translations among the first million sentences, for each aligned parallel corpus. We further processed the data by pruning to remove samples with fewer than 3 words, multiple sentences, or incomplete sentences. The resulting dataset contains 1.9M translation sentence pairs from this source. Experiments We aim to address the following questions: (1) Can syntactic structures be embedded, and for multiple languages? (2) Can parallel corpora be used to learn syntactic structure for low-resource languages? (3) Does multilingual pre-training improve syntactic embeddings? We address question 1 in Secs. SECREF20 and SECREF28 by evaluating the quality of syntactic and semantic embeddings in several ways. Questions 2 and 3 are addressed in Sec. SECREF30 by studying the transfer-learning performance of syntactic embeddings. Experiments ::: Quality of Syntactic Embeddings We studied the quality of the learned syntactic embeddings by using a nearest-neighbour (NN) method. First, we calculated the UPOS sequence of all sentences in the Tatoeba dataset by using a tagger. Sentences were then assigned to distinct groups according to their UPOS sequence, i.e., all sentences belonging to the same group had the same UPOS sequence. For all languages except Korean, a held-out test set was created by randomly sampling groups that contained at least 6 sentences. For Korean, all groups containing at least 6 sentences were kept as the test set since the dataset is small. During evaluation, we applied max-pooling to the outputs of the encoder to obtain the syntactic embeddings of the held-out sentences. For each syntactic embedding, we find its top nearest neighbour (1-NN) and top-5 nearest neighbours (5-NN) in the embedding space for the held-out sentences, based on their UPOS group. Given $n$ sentences $S = \lbrace s_0, \dots , s_{n-1}\rbrace $ and their embeddings $E = \lbrace e_0, \dots , e_{n-1}\rbrace $, for each $s_i$ there is a set of $k$ gold nearest neighbours $G(i, k) = \lbrace g_0, \dots , g_{k-1}\rbrace $, $G(i, k) \subseteq S$ such that $d(s_i, g) \le d(s_i, s) \textrm { for all } g \in G(i, k) \textrm { and } s \in S \setminus G(i, k)$, where $d(\cdot , \cdot )$ is the cosine distance. Given embedding $e_i$, we calculate cosine distances $\lbrace d(e_i, e_j) \textrm { for } e_j \in E, e_j \ne e_i\rbrace $ and sort them into non-decreasing order $d_{j_0} \le d_{j_1} \le \dots \le d_{j_{n-2}}$. We consider the ordering to be unique as the probability of embedding cosine distances being equal is very small. The set of embedded $k$-nearest neighbours of $s_i$ is then defined as $N(i, k) = \lbrace s_{j_0}, \dots , s_{j_{k-1}}\rbrace $. Finally, the $k$-nearest neighbours accuracy for $s_i$ is given by $|N(i, k) \cap G(i, k)| / k$. A good embedding model should cluster the embeddings for similar inputs in the embedding space. Hence, the 5-NN test can be seen as an indicator of how cohesive the embedding space is. The results are shown in Table TABREF22. The differences in the number of groups in each language are due to different availabilities of sentences and sentence-types in the Tatoeba dataset.
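A small sketch of the nearest-neighbour evaluation described above, assuming the held-out embeddings and the UPOS-group id of every sentence are already computed; membership in the same UPOS group is used here as a proxy for the gold-neighbour overlap.

```python
# Sketch of the 1-NN / 5-NN accuracy over syntactic embeddings (numpy assumed).
import numpy as np

def knn_accuracy(embeddings, group_ids, k=5):
    """Fraction of the k nearest neighbours (cosine distance) that share
    the query sentence's UPOS group, averaged over all held-out sentences."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dist = 1.0 - X @ X.T                     # pairwise cosine distances
    np.fill_diagonal(dist, np.inf)           # exclude the sentence itself
    scores = []
    for i in range(len(X)):
        nn_idx = np.argsort(dist[i])[:k]     # k closest sentences
        scores.append(np.mean(group_ids[nn_idx] == group_ids[i]))
    return float(np.mean(scores))

# Toy usage: 100 sentences, 300-dim embeddings, 20 UPOS groups.
emb = np.random.randn(100, 300)
groups = np.random.randint(0, 20, size=100)
print(knn_accuracy(emb, groups, k=1), knn_accuracy(emb, groups, k=5))
```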
The high nearest neighbours accuracy indicates that syntax information was successfully captured by the embeddings. Table TABREF22 also shows that the syntactic information of multiple languages was captured by a single embedding model. Experiments ::: Quality of Syntactic Embeddings ::: Language Model A number of recent works BIBREF7, BIBREF8 have probed language models to determine if they contain syntactic information. We applied the same nearest neighbours experiment (with the same test sets) on a number of existing language models: Universal Sentence Encoder (USE) BIBREF17, LASER, and BERT. For USE we used models available from TensorHub. For LASER we used models and created embeddings from the official repository . For BERT, we report the results using max (BERT$_{max}$) and average-pooling (BERT$_{avg}$), obtained from the BERT embedding toolkit with the multilingual cased model (104 languages, 12-layers, 768-hidden units, 12-heads), and `pooled-output' (BERT$_{output}$) from the TensorHub version of the model with the same parameters. We computed the nearest neighbours experiment for all languages in the training data for the above models. The results are shown in Table TABREF27. The results show that general purpose language models do capture syntax information, which varies greatly across languages and models. The nearest neighbours accuracy of our syntactic embeddings in Table TABREF22 significantly outperforms the general purpose language models. Arguably these language models were trained using different training data. However, this is a reasonable comparison because many real-world applications rely on released pre-trained language models for syntactically related information. Hence, we want to show that we can use much smaller models trained with direct supervision, to obtain syntactic embeddings with similar or better quality. Nonetheless, the training method used in this work can certainly be extended to architectures similar to BERT or USE. Experiments ::: Functional Dissimilarity The experiments in the previous section showed that the proposed syntactic embeddings formed cohesive clusters in the embedding space, based on UPOS sequence similarities. We further studied the spatial relationships within the embeddings. Word2Vec BIBREF18 examined spatial relationships between embeddings and compared them to the semantic relationships between words. Operations on vectors in the embedding space such as $King - Man + Woman = Queen$ created vectors that also correlated with similar operations in semantics. Such semantic comparisons do not directly translate to syntactic embeddings. However, syntax information shifts with edits on POS sequences. Hence, we examined the spatial relationships between syntactic embeddings by comparing their cosine similarities with the edit distances between UPOS sequence pairs. Given $n$ UPOS sequences $U = \lbrace u_0,...,u_{n-1}\rbrace $, we compute the matrix $L \in \mathbb {R}^{n \times n}$, where $l_{ij} = l(u_i, u_j)$, the complement of the normalized Levenshtein distance between $u_i$ and $u_j$. Given the set of embedding vectors $\lbrace e_0,...,e_{n-1}\rbrace $ where $e_i$ is the embedding for sentence $s_i$, we also compute $D \in \mathbb {R}^{n \times n}$, where $d_{ij} = d(e_i, e_j)$. We further normalize $d_{ij}$ to be within $[0, 1]$ by min-max normalization to obtain $\hat{D} = \operatorname{minMax}(D)$. 
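The two matrices above can be sketched as follows, assuming numpy, UPOS sequences given as lists of tags, and normalization of the Levenshtein distance by the longer sequence length. The final aggregation shown (mean absolute difference between $L$ and $1-\hat{D}$) is an assumption, since the exact score formula is not reproduced in this excerpt.

```python
# Sketch of L (complement of normalized edit distance over UPOS sequences) and
# D_hat (min-max-normalized cosine distances between embeddings).
import numpy as np

def levenshtein(a, b):
    # Standard dynamic-programming edit distance over UPOS tag sequences.
    dp = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    dp[:, 0] = np.arange(len(a) + 1)
    dp[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i, j] = min(dp[i - 1, j] + 1, dp[i, j - 1] + 1,
                           dp[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return dp[len(a), len(b)]

def functional_dissimilarity(upos_seqs, embeddings):
    n = len(upos_seqs)
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            norm = max(len(upos_seqs[i]), len(upos_seqs[j]), 1)
            L[i, j] = 1.0 - levenshtein(upos_seqs[i], upos_seqs[j]) / norm
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    D = 1.0 - X @ X.T                                   # cosine distances
    D_hat = (D - D.min()) / (D.max() - D.min() + 1e-9)  # min-max normalization
    # Hypothetical aggregation: similar UPOS pairs (high L) should have small
    # normalized distance (low D_hat), so L should track 1 - D_hat.
    return float(np.abs(L - (1.0 - D_hat)).mean())
```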
Following BIBREF19, we define the functional dissimilarity score by comparing these two matrices. Intuitively, UPOS sequences that are similar (smaller edit distance) should be embedded close to each other in the embedding space, and embeddings that are further away should have dissimilar UPOS sequences. Hence, the functional dissimilarity score is low if the relative changes in UPOS sequences are reflected in the embedding space. The score is high if such changes are not reflected. The functional dissimilarity score was computed using sentences from the test set in the CoNLL 2017 Universal Dependencies task BIBREF20 for the relevant languages with the provided UPOS sequences. Furthermore, none of the evaluated models, including the proposed method, were trained with CoNLL 2017 data. We compared the functional dissimilarity scores of our syntactic representations against embeddings obtained from BERT and LASER, to further demonstrate that simple network structures with explicit supervision may be sufficient to capture syntactic structure. All the results are shown in Table TABREF29. We only show the best (lowest) results from BERT. Experiments ::: Transfer Performance of Syntactic Embeddings Many NLP tasks utilize POS as features, but human-annotated POS sequences are difficult and expensive to obtain. Thus, it is important to know if we can learn sentence-level syntactic embeddings for low-resource languages without treebanks. We performed zero-shot transfer of the syntactic embeddings for French, Portuguese and Indonesian. French and Portuguese are simulated low-resource languages, while Indonesian is a true low-resource language. We reported the 1-NN and 5-NN accuracies for all languages using the same evaluation setting as described in the previous section. The results are shown in Table TABREF31 (top). We also fine-tuned the learned syntactic embeddings on the low-resource language for varying amounts of training data and numbers of languages. The results are shown in Table TABREF31 (bottom). In this table, the low-resource language is denoted as the `source', while the high-resource language(s) is denoted as the `target'. With this training method, no UPOS tag information was provided to the model for the `source' languages, where supervising information comes solely from parallel sentences and UPOS tags in high-resource languages. The results show that for a new language (French and Portuguese) that is similar to the family of pre-training languages, there are two ways to achieve higher 1-NN accuracy. If the number of unique sentences in the new language is small, accuracy can be improved by increasing the size of the parallel corpora used to fine-tune. If only one parallel corpus is available, accuracy can be improved by increasing the number of unique sentence-pairs used to fine-tune.
In contrast to prior work, our method only requires parallel corpora and UPOS tags in the target language. We studied the quality of learned embeddings by examining nearest neighbours in the embedding space and investigating their functional dissimilarity. These results were compared against recent state-of-the-art language models. We also showed that pre-training with a parallel corpus allowed the syntactic embeddings to be transferred to low-resource languages via few-shot fine-tuning. Our evaluations indicated that syntactic structure can be learned by using simple network architectures and explicit supervision. Future directions include improving the transfer performance for low-resource languages, disentangling semantic and syntactic embeddings, and analyzing the effect of transfer learning between languages belonging to the same versus different language families.
The dataset was created by using translations provided by Tatoeba and OpenSubtitles BIBREF16.
572458399a45fd392c3a4e07ce26dcff2ad5a07d
572458399a45fd392c3a4e07ce26dcff2ad5a07d_0
Q: How much more accurate is the model than the baseline? Text: Introduction Question Answering (QA) modules play particularly important roles in recent dialog-based Natural Language Understanding (NLU) systems, such as Apple's Siri and Amazon's Echo. Users chat with AI systems in natural language to get the answers they are seeking. QA systems can deal with two types of question: factoid and non-factoid ones. The former sort asks, for instance, for the name of a thing or person such as “What/Who is $X$?”. The latter sort includes more diverse questions that cannot be answered by a short fact. For instance, users may ask for advice on how to make a long-distance relationship work well or for opinions on public issues. Significant progress has been made in answering factoid questions BIBREF0, BIBREF1; however, answering non-factoid questions remains a challenge for QA modules. Long short term memory (LSTM) sequence-to-sequence models BIBREF2, BIBREF3, BIBREF4 try to generate short replies to the short utterances often seen in chat systems. Evaluations have indicated that these models have the possibility of supporting simple forms of general knowledge QA, e.g. “Is the sky blue or black?”, since they learn commonly occurring sentences in the training corpus. Recent machine reading comprehension (MRC) methods BIBREF5, BIBREF6 try to return a single short answer to a question by extracting answer spans from the provided passages. Unfortunately, they may generate unsatisfying answers to regular non-factoid questions because they can easily become confused when learning several different long answers to the same non-factoid question, as pointed out by BIBREF7, BIBREF8. This paper tackles a new problem: conclusion-supplement answer generation for non-factoid questions. Here, the conclusion consists of sentences that directly answer the question, while the supplement consists of information supporting the conclusion, e.g., reasons or examples. Such conclusion-supplement answers are important for helping questioners decide their actions, especially in NLU. As described in BIBREF9, users prefer a supporting supplement before accepting an instruction (i.e., a conclusion). Good debates also include claims (i.e., conclusions) about a topic and supplements to support them that will allow users to reach decisions BIBREF10. The following example helps to explain how conclusion-supplement answers are useful to users: “Does separation by a long distance ruin love?” Current methods tend to answer this question with short and generic replies, such as, “Distance cannot ruin true love”. The questioner, however, is not likely to be satisfied with such a trite answer and will want to know how the conclusion was reached. If a supplemental statement like “separations certainly test your love” is presented with the conclusion, the questioner is more likely to accept the answer and use it to reach a decision. Furthermore, there may be multiple answers to a non-factoid question. For example, the following answer is also a potential answer to the question: “distance ruins most relationships. You should keep in contact with him”. The current methods, however, have difficulty generating such conclusion-supplement answers because they can become easily confused when they try to learn several different and long answers to a non-factoid question. To address the above problem, we propose a novel architecture, called the ensemble network. 
It is an extension of existing encoder-decoder models, and it generates two types of decoder output sequences: conclusion and supplement. It uses two viewpoints for selecting the conclusion statements and supplementary statements. (Viewpoint 1) The context present in the conclusion decoder's output is linked to supplementary-decoder output states on the basis of an attention mechanism. Thus, the context of the conclusion sequence directly impacts the decoder states of the supplement sequences. This, as a result, generates natural-sounding supplementary sequences. (Viewpoint 2) The closeness of the question sequence and conclusion (or supplement) sequence as well as the closeness of the question sequence with the combination of conclusion and supplement sequences is considered. By assessing the closeness at the sentence level and sentence-combination level in addition to the word level, it can generate answers that include good supplementary sentences following the context of the conclusion. This avoids having to learn several different conclusion-supplement answers assigned to a single non-factoid question and generating answers whose conclusions and supplements are logically inconsistent with each other. Community-based QA (CQA) websites tend to provide answers composed of conclusion and supplementary statements; from our investigation, 77% of non-factoid answers (love advice) in the Oshiete-goo (https://oshiete.goo.ne.jp) dataset consist of these two statement types. The same is true for 82% of the answers in the Yahoo non-factoid dataset related to the fields of social science, society & culture and arts & humanities. We used the above-mentioned CQA datasets in our evaluations, since they provide diverse answers given by many responders. The results showed that our method outperforms existing ones at generating correct and natural answers. We also ran a love advice service in Oshiete-goo to evaluate the usefulness of our ensemble network. Related work The encoder-decoder framework learns how to transform one representation into another. Contextual LSTM (CLSTM) incorporates contextual features (e.g., topics) into the encoder-decoder framework BIBREF11, BIBREF12. It can be used to make the context of the question a part of the answer generation process. HieRarchical Encoder Decoder (HRED) BIBREF12 extends the hierarchical recurrent encoder-decoder neural network into the dialogue domain; each question can be encoded into a dense context vector, which is used to recurrently decode the tokens in the answer sentences. Such sequential generation of next statement tokens, however, weakens the original meaning of the first statement (question). Recently, several models based on the Transformer BIBREF13, such as for passage ranking BIBREF14, BIBREF15 and answer selection BIBREF16, have been proposed to evaluate question-answering systems. There are, however, few Transformer-based methods that generate non-factoid answers. Recent neural answer selection methods for non-factoid questions BIBREF17, BIBREF18, BIBREF19 learn question and answer representations and then match them using certain similarity metrics. They use open datasets stored at CQA sites like Yahoo! Answers since they include many diverse answers given by many responders and thus are good sources of non-factoid QA training data. The above methods, however, can only select and extract answer sentences; they do not generate them.
Recent machine reading comprehension methods try to answer a question with exact text spans taken from provided passages BIBREF20, BIBREF6, BIBREF21, BIBREF22. Several studies on the MS-MARCO dataset BIBREF23, BIBREF5, BIBREF8 define the task as using multiple passages to answer a question where the words in the answer are not necessarily present in the passages. Their models, however, require passages other than QA pairs for both training and testing. Thus, they cannot be applied to CQA datasets that do not have such passages. Furthermore, most of the questions in their datasets only have a single answer. Thus, we think their purpose is different from ours; generating answers for non-factoid questions that tend to demand diverse answers. There are several complex QA tasks such as those present in the TREC complex interactive QA tasks or DUC complex QA tasks. Our method can be applied to those non-factoid datasets if an access fee is paid. Model This section describes our conclusion-supplement answer generation model in detail. An overview of its architecture is shown in Figure FIGREF3. Given an input question sequence ${\bf {Q}} = \lbrace {\bf {q}}_1, \cdots , {\bf {q}}_i, \cdots , {\bf {q}}_{N_q}\rbrace $, the proposal outputs a conclusion sequence ${\bf {C}} = \lbrace {\bf {c}}_1, \cdots , {\bf {c}}_t, \cdots , {\bf {c}}_{N_c}\rbrace $, and supplement sequence ${\bf {S}} = \lbrace {\bf {s}}_1, \cdots , {\bf {s}}_t, \cdots , {\bf {s}}_{N_s}\rbrace $. The goal is to learn a function mapping from ${\bf {Q}}$ to ${\bf {C}}$ and ${\bf {S}}$. Here, ${\bf {q}}_i$ denotes a one-of-$K$ embedding of the $i$-th word in an input sequence of length $N_q$. ${\bf {c}}_t$ (${\bf {s}}_t$) denotes a one-of-$K$ embedding of the $t$-th word in an input sequence of length $N_c$ ($N_s$). Model ::: Encoder The encoder converts the input $\bf {Q}$ into a question embedding, ${\bf {O}}_q$, and hidden states, ${\bf {H}}={\lbrace {\bf {h}}_i\rbrace _i}$. Since the question includes several pieces of background information on the question, e.g. on the users' situation, as well as the question itself, it can be very long and composed of many sentences. For this reason, we use the BiLSTM encoder, which encodes the question in both directions, to better capture the overall meaning of the question. It processes both directions of the input, $\lbrace {\bf {q}}_1, \cdots , {\bf {q}}_{N_q}\rbrace $ and $\lbrace {\bf {q}}_{N_q}, \cdots , {\bf {q}}_{1}\rbrace $, sequentially. At time step $t$, the encoder updates the hidden state by: where $f()$ is an LSTM unit, and ${\bf {h}}^f_i$ and ${\bf {h}}^b_i$ are hidden states output by the forward-direction LSTM and backward-direction LSTM, respectively. We also want to reflect sentence-type information such as conclusion type or supplement type in sequence-to-sequence learning to better understand the conclusion or supplement sequences. We achieve this by adding a sentence type vector for conclusion $\bf {C}$ or for supplement $\bf {S}$ to the input gate, forget gate output gate, and cell memory state in the LSTM model. This is equivalent to processing a composite input [${\bf {q}}_i$, $\bf {C}$] or [${\bf {q}}_i$, $\bf {S}$] in the LSTM cell that concatenates the word embedding and sentence-type embedding vectors. We use this modified LSTM in the above BiLSTM model as: When encoding the question to decode the supplement sequence, ${\bf {S}}$ is input instead of ${\bf {C}}$ in the above equation. 
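A minimal sketch of the sentence-type-conditioned question encoder described above, assuming PyTorch and toy sizes. Following the stated equivalence, the type embedding is simply concatenated with each word embedding before the BiLSTM; the max-pooled question embedding used in the next passage is also returned. Names are illustrative.

```python
import torch
import torch.nn as nn

class TypedQuestionEncoder(nn.Module):
    def __init__(self, vocab=30000, emb=128, type_emb=32, hid=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, emb)
        self.type_emb = nn.Embedding(2, type_emb)     # 0: conclusion C, 1: supplement S
        self.bilstm = nn.LSTM(emb + type_emb, hid,
                              bidirectional=True, batch_first=True)

    def forward(self, question_ids, sentence_type):
        B, T = question_ids.shape
        w = self.word_emb(question_ids)                       # (B, T, emb)
        t = self.type_emb(sentence_type).unsqueeze(1).expand(-1, T, -1)
        h, _ = self.bilstm(torch.cat([w, t], dim=-1))         # (B, T, 2*hid)
        return h, h.max(dim=1).values                         # states, pooled O_q^c or O_q^s

enc = TypedQuestionEncoder()
q = torch.randint(0, 30000, (2, 40))
states_c, Oq_c = enc(q, torch.zeros(2, dtype=torch.long))     # conclusion-biased encoding
states_s, Oq_s = enc(q, torch.ones(2, dtype=torch.long))      # supplement-biased encoding
```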
The BiLSTM encoder then applies a max-pooling layer to all hidden vectors to extract the most salient signal for each word. As a result, it generates a fixed-sized distributed vector representation for the conclusion, ${\bf {O}}^c_q$, and another for the supplement, ${\bf {O}}^s_q$. ${\bf {O}}^c_q$ and ${\bf {O}}^s_q$ are different since the encoder is biased by the corresponding sentence-type vector, $\bf {C}$ or $\bf {S}$. As depicted in Figure FIGREF3, the BiLSTM encoder processes each word with a sentence-type vector (i.e. $\bf {C}$ or $\bf {S}$) and the max-pooling layer to produce the question embedding ${\bf {O}}^c_q$ or ${\bf {O}}^s_q$. These embeddings are used as context vectors in the decoder network for the conclusion and supplement. Model ::: Decoder The decoder is composed of a conclusion decoder and supplement decoder. Here, let ${\bf {h}}^{\prime }_t$ be the hidden state of the $t$-th LSTM unit in the conclusion decoder. Similar to the encoder, the decoder also decodes a composite input [${\bf {c}}_t$, $\bf {C}$] in an LSTM cell that concatenates the conclusion word embedding and sentence-type embedding vectors. It is formulated as follows: where $f^{\prime }()$ denotes the conclusion decoder LSTM, $\operatornamewithlimits{softmax}_c$ the probability of word $c$ given by a softmax layer, $c_t$ the $t$-th conclusion decoded token, and ${\bf {c}}_t$ the word embedding of $c_t$. The supplement decoder's hidden state ${\bf {h}}^{\prime \prime }_t$ is computed in the same way with ${\bf {h}}^{\prime }_t$; however, it is updated in the ensemble network described in the next subsection. As depicted in Figure FIGREF3, the LSTM decoder processes tokens according to question embedding ${\bf {O}}^c_q$ or ${\bf {O}}^s_q$, which yields a bias corresponding to the sentence-type vector, $\bf {C}$ or $\bf {S}$. The output states are then input to the ensemble network. Model ::: Ensemble network The conventional encoder-decoder framework often generates short and simple sentences that fail to adequately answer non-factoid questions. Even if we force it to generate longer answers, the decoder output sequences become incoherent when read from the beginning to the end. The ensemble network solves the above problem by (1) passing the context from the conclusion decoder's output sequence to the supplementary decoder hidden states via an attention mechanism, and (2) considering the closeness of the encoder's input sequence to the decoders' output sequences as well as the closeness of the encoder's input sequence to the combination of decoded output sequences. (1) To control the context, we assess all the information output by the conclusion decoder and compute the conclusion vector, ${\bf {O}}_c$. ${\bf {O}}_c$ is a sentence-level representation that is more compact, abstractive, and global than the original decoder output sequence. To get it, we apply BiLSTM to the conclusion decoder's output states $\lbrace {{{\tilde{\bf {y}}}}_t^c} \rbrace _t$; i.e., $\lbrace {{{\tilde{\bf {y}}}}_t^c} \rbrace _t = \lbrace {\bf {U}}\cdot \operatornamewithlimits{softmax}({\bf {h}}^{\prime }_t)\rbrace _t$, where word representation matrix $\bf {U}$ holds the word representations in its columns. At time step $t$, the BiLSTM encoder updates the hidden state by: where ${\bf {h}}^{c,f}_t$ and ${\bf {h}}^{c,b}_t$ are the hidden states output by the forward LSTM and backward LSTM in the conclusion encoder, respectively. 
It applies a max-pooling layer to all hidden vectors to extract the most salient signal for each word to compute the embedding for conclusion ${\bf {O}}_c$. Next, it computes the context vector ${\bf {cx}}_t$ at the $t$-th step by using the $(t\!\!-\!\!1)$-th output hidden state of the supplement decoder, ${\bf {h}}^{\prime \prime }_{t\!-\!1}$, weight matrices, ${\bf {V}}_a$ and ${\bf {W}}_a$, and a sigmoid function, $\sigma $: This computation lets our ensemble network extract a conclusion-sentence level context. The resulting supplement sequences follow the context of the conclusion sequence. Finally, ${{\bf {h}}}^{\prime \prime }_t$ is computed as: $z$ can be $i$, $f$, or $o$, which represent three gates (e.g., input ${\bf {i}}_t$, forget ${\bf {f}}_t$, and output ${\bf {o}}_t$). ${\bf {l}}_t$ denotes a cell memory vector. ${{\bf {W}}}^a_z$ and ${{\bf {W}}}^a_l$ denote attention parameters. (2) To control the closeness at the sentence level and sentence-combination level, it assesses all the information output by the supplement decoder and computes the supplement vector, ${\bf {O}}_s$, in the same way as it computes ${\bf {O}}_c$. That is, it applies BiLSTM to the supplement decoder's output states $\lbrace {{{\tilde{\bf {y}}}}_t^s} \rbrace _t$; i.e., $\lbrace {{{\tilde{\bf {y}}}}_t^s} \rbrace _t = \lbrace {\bf {U}}\!\cdot \! \operatornamewithlimits{softmax}({{\bf {h}}_t^{\prime \prime }})\rbrace _t$, where the word representations are found in the columns of $\bf {U}$. Next, it applies a max-pooling layer to all hidden vectors in order to compute the embeddings for supplement ${\bf {O}}_s$. Finally, to generate the conclusion-supplement answers, it assesses the closeness of the embeddings for the question ${\bf {O}}_q$ to those for the answer sentences (${\bf {O}}_c$ or ${\bf {O}}_s$) and their combination ${\bf {O}}_c$ and ${\bf {O}}_s$. The loss function for the above metrics is described in the next subsection. As depicted in Figure FIGREF3, the ensemble network computes the conclusion embedding ${\bf {O}}_c$, the attention parameter weights from ${\bf {O}}_c$ to the decoder output supplement states (dotted lines represent attention operations), and the supplement embedding ${\bf {O}}_s$. Then, ${\bf {O}}_c$ and ${\bf {O}}_s$ are input to the loss function together with the question embedding ${\bf {O}}_q = [{\bf {O}}^c_q,{\bf {O}}^s_q]$. Model ::: Loss function of ensemble network Our model uses a new loss function rather than generative supervision, which aims to maximize the conditional probability of generating the sequential output $p({\bf {y}}|{\bf {q}})$. This is because we think that assessing the closeness of the question and an answer sequence as well as the closeness of the question to two answer sequences is useful for generating natural-sounding answers. The loss function is for optimizing the closeness of the question and conclusion and that of the question and supplement as well as for optimizing the closeness of the question with the combination of the conclusion and supplement. 
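As a rough illustration of the sentence-level vectors and the closeness assessment described above, the following sketch builds ${\bf O}_c$ and ${\bf O}_s$ with a BiLSTM plus max-pooling over decoder-side token vectors and scores closeness with cosine similarity. PyTorch and toy sizes are assumed; the pairing of the question-embedding halves with the conclusion and supplement is an assumption, and the actual loss built on these scores is described next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceVector(nn.Module):
    def __init__(self, emb=128, hid=256):
        super().__init__()
        self.bilstm = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)

    def forward(self, token_vectors):          # (B, T, emb), stand-in for {~y_t}
        h, _ = self.bilstm(token_vectors)
        return h.max(dim=1).values             # (B, 2*hid): O_c or O_s

sent_vec = SentenceVector()
conclusion_tokens = torch.randn(2, 25, 128)    # stand-in for conclusion decoder outputs
supplement_tokens = torch.randn(2, 30, 128)    # stand-in for supplement decoder outputs
O_c = sent_vec(conclusion_tokens)
O_s = sent_vec(supplement_tokens)
O_q = torch.randn(2, 1024)                     # stand-in for [O_q^c, O_q^s] from the encoder

# Closeness of the question to each answer part and to their combination
# (one plausible pairing of the question-embedding halves).
close_conc = F.cosine_similarity(O_q[:, :512], O_c, dim=-1)
close_supp = F.cosine_similarity(O_q[:, 512:], O_s, dim=-1)
close_both = F.cosine_similarity(O_q, torch.cat([O_c, O_s], dim=-1), dim=-1)
```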
The training loss ${\cal {L}}_s$ is expressed as the following hinge loss, where ${\bf {O}}^{+}$ is the output decoder vector for the ground-truth answer, ${\bf {O}}^{-}$ is that for an incorrect answer randomly chosen from the entire answer space, $M$ is a constant margin, and $\mathbb {A}$ is set equal to $\lbrace [{\bf {O}}^{+}_c, {\bf {O}}^{-}_s], [{\bf {O}}^{-}_c, {\bf {O}}^{+}_s], [{\bf {O}}^{-}_c, {\bf {O}}^{-}_s]\rbrace $: The key idea is that ${\cal {L}}_s$ checks whether or not the conclusion, supplement, and their combination have been well predicted. In so doing, ${\cal {L}}_s$ can optimize not only the prediction of the conclusion or supplement but also the prediction of the combination of conclusion and supplement. The model is illustrated in the upper part of Figure FIGREF3; $({\bf {O}}_q, {\bf {O}}_c, {\bf {O}}_s)$ is input to compute the closeness and sequence combination losses. Model ::: Training The training loss ${\cal {L}}_w$ is used to check ${\cal {L}}_s$ and the cross-entropy loss in the encoder-decoder model. In the following equation, the conclusion and supplement sequences are merged into one sequence $\bf {Y}$ of length $T$, where $T\!=\!N_c\!+\!N_s$. $\alpha $ is a parameter to control the weighting of the two losses. We use adaptive stochastic gradient descent (AdaGrad) to train the model in an end-to-end manner. The loss of a training batch is averaged over all instances in the batch. Figure FIGREF3 illustrates the loss for the ensemble network and the cross-entropy loss. Evaluation ::: Compared methods We compared the performance of our method with those of (1) Seq2seq, a seq2seq attention model proposed by BIBREF4; (2) CLSTM, i.e., the CLSTM model BIBREF11; (3) Trans, the Transformer BIBREF13, which has proven effective for common NLP tasks. In these three methods, conclusion sequences and supplement sequences are decoded separately and then joined to generate answers. They give more accurate results than methods in which the conclusion sequences and supplement sequences are decoded sequentially. We also compared (4) HRED, a hierarchical recurrent encoder-decoder model BIBREF12 in which conclusion sequences and supplement sequences are decoded sequentially to learn the context from conclusion to supplement; (5) NAGMWA, i.e., our neural answer generation model without an attention mechanism. This means that NAGMWA does not pass ${\bf {cx}}_t$ in Eq. (DISPLAY_FORM10) to the decoder, and conclusion decoder and supplement decoder are connected only via the loss function ${\cal {L}}_s$. In the tables and figures that follow, NAGM means our full model. Evaluation ::: Dataset Our evaluations used the following two CQA datasets: Evaluation ::: Dataset ::: Oshiete-goo The Oshiete-goo dataset includes questions stored in the “love advice” category of the Japanese QA site, Oshiete-goo. It has 771,956 answers to 189,511 questions. We fine-tuned the model using a corpus containing about 10,032 question-conclusion-supplement (q-c-s) triples. We used 2,824 questions from the Oshiete-goo dataset. On average, the answers to these questions consisted of about 3.5 conclusions and supplements selected by human experts. The questions, conclusions, and supplements had average lengths of 482, 41, and 46 characters, respectively. There were 9,779 word tokens in the questions and 6,317 tokens in answers; the overlap was 4,096. Evaluation ::: Dataset ::: nfL6 We also used the Yahoo nfL6 dataset, the largest publicly available English non-factoid CQA dataset. 
It has 499,078 answers to 87,361 questions. We fine-tuned the model by using questions in the “social science”, “society & culture”, and “arts & humanities” categories, since they require diverse answers. This yielded 114,955 answers to 13,579 questions. We removed answers that included some stop words, e.g. slang words, or those that only refer to some URLs or descriptions in literature, since such answers often become noise when an answer is generated. Human experts annotated 10,299 conclusion-supplement sentences pairs in the answers. In addition, we used a neural answer-sentence classifier to classify the sentences into conclusion or supplement classes. It first classified the sentences into supplements if they started with phrases such as “this is because” or “therefore”. Then, it applied a BiLSTM with max-pooling to the remaining unclassified sentences, ${\bf {A}} = \lbrace {\bf {a}}_1, {\bf {a}}_2, \cdots , {\bf {a}}_{N_a}\rbrace $, and generated embeddings for the un-annotated sentences, ${\bf {O}}^a$. After that, it used a logistic sigmoid function to return the probabilities of mappings to two discrete classes: conclusion and supplement. This mapping was learned by minimizing the classification errors using the above 10,299 labeled sentences. As a result, we automatically acquired 70,000 question-conclusion-supplement triples from the entire answers. There were 11,768 questions and 70,000 answers. Thus, about 6 conclusions and supplements on average were assigned to a single question. The questions, conclusions, and supplements had average lengths of 46, 87, and 71 characters, respectively. We checked the performance of the classifier; human experts checked whether the annotation results were correct or not. They judged that it was about 81% accurate (it classified 56,762 of 70,000 sentences into correct classes). There were 15,690 word tokens in questions and 124,099 tokens in answers; the overlap was 11,353. Evaluation ::: Methodology We conducted three evaluations using the Oshiete-goo dataset; we selected three different sets of 500 human-annotated test pairs from the full dataset. In each set, we trained the model by using training pairs and input questions in test pairs to the model. We repeated the experiments three times by randomly shuffling the train/test sets. For the evaluations using the nfL6 dataset, we prepared three different sets of 500 human-annotated test q-c-s triples from the full dataset. We used 10,299 human-annotated triples to train the neural sentence-type classifier. Then, we applied the classifier to the unlabeled answer sentences. Finally, we evaluated the answer generation performance by using three sets of machine-annotated 69,500 triples and 500 human-annotated test triples. After training, we input the questions in the test triples to the model to generate answers for both datasets. We compared the generated answers with the correct answers. The results described below are average values of the results of three evaluations. The softmax computation was slow since there were so many word tokens in both datasets. Many studies BIBREF24, BIBREF25, BIBREF3 restricted the word vocabulary to one based on frequency. This, however, narrows the diversity of the generated answers. Since diverse answers are necessary to properly reply to non-factoid questions, we used bigram tokens instead of word tokens to speed up the computation without restricting the vocabulary. 
Accordingly, we used 4,087 bigram tokens for the Oshiete-goo dataset and 11,629 tokens for the nfL6 dataset. To measure performance, we used human judgment as well as two popular metrics BIBREF2, BIBREF25, BIBREF4 for measuring the fluency of computer-generated text: ROUGE-L BIBREF26 and BLEU-4 BIBREF27. ROUGE-L is commonly used for evaluating non-factoid QA BIBREF28; however, we also think human judgment is important in this task. Evaluation ::: Parameter setup For both datasets, we tried different parameter values and set the size of the bigram token embedding to 500, the size of LSTM output vectors for the BiLSTMs to $500 \times 2$, and the number of topics in the CLSTM model to 15. We tried different margins, $M$, in the hinge loss function and settled on $0.2$. The iteration count $N$ was set to 100. We varied $\alpha $ in Eq. (DISPLAY_FORM13) from 0 to 2.0 and checked the impact of $L_s$ by changing $\alpha $. Table TABREF18 shows the results. When $\alpha $ is zero, the results are almost as poor as those of the seq2seq model. On the other hand, while raising the value of $\alpha $ places greater emphasis on our ensemble network, it also degrades the grammaticality of the generated results. We set $\alpha $ to 1.0 after determining that it yielded the best performance. This result clearly indicates that our ensemble network contributes to the accuracy of the generated answers. A comparison of our full method NAGM with the one without the sentence-type embedding (we call this method w/o ste) that trains separate decoders for two types of sentences is shown in Table TABREF19. The result indicated that the existence of the sentence type vector, $\bf {C}$ or $\bf {S}$, contributes to the accuracy of the results since it distinguishes between sentence types. Evaluation ::: Results ::: Performance The results for Oshiete-goo are shown in Table TABREF20 and those for nfL6 are shown in Table TABREF21. They show that CLSTM is better than Seq2seq. This is because it incorporates contextual features, i.e. topics, and thus can generate answers that track the question's context. Trans is also better than Seq2seq, since it uses attention from the question to the conclusion or supplement more effectively than Seq2seq. HRED failed to attain a reasonable level of performance. These results indicate that sequential generation has difficulty generating subsequent statements that follow the original meaning of the first statement (question). NAGMWA is much better than the other methods except NAGM, since it generates answers whose conclusions and supplements as well as their combinations closely match the questions. Thus, conclusions and supplements in the answers are consistent with each other and avoid the confusion caused by several different conclusion-supplement answers being assigned to a single non-factoid question. Finally, NAGM is consistently superior to the conventional attentive encoder-decoders regardless of the metric. Its ROUGE-L and BLEU-4 scores are much higher than those of CLSTM. Thus, NAGM generates more fluent sentences by assessing the context from conclusion to supplement sentences in addition to the closeness of the question and sentences as well as that of the question and sentence combinations.
Different from BIBREF29, we hired human experts who had experience in Oshiete-goo QA community service. Thus, they were familiar with the sorts of answers provided by and to the QA community. The experts asked questions, which were not included in our training datasets, to the AI system and rated the answers; one answer per question. The experts rated the answers as follows: (1) the content of the answer matched the question, and the grammar was okay; (2) the content was suitable, but the grammar was poor; (3) the content was not suitable, but the grammar was okay; (4) both the content and grammar were poor. Note that our evaluation followed the DUC-style strategy. Here, we mean “grammar” to cover grammaticality, non-redundancy, and referential clarity in the DUC strategy, whereas we mean the “content matched the questions” to refer to “focus” and “structure and coherence” in the DUC strategy. The evaluators were given more than a week to carefully evaluate the generated answers, so we consider that their judgments are reliable. Each expert evaluated 50 questions. We combined the scores of the experts by summing them. They did not know the identity of the system in the evaluation and reached their decisions independently. Table TABREF22 and Table TABREF22 present the results. The numbers are percentages. Table 7 presents examples of questions and answers. For Oshiete-goo results, the original Japanese and translated English are presented. The questions are very long and include long background descriptions before the questions themselves. These results indicate that the experts were much more satisfied with the outputs of NAGM than those of CLSTM. This is because, as can be seen in Table 7, NAGM generated longer and better question-related sentences than CLSTM did. NAGM generated grammatically good answers whose conclusion and supplement statements are well matched with the question and the supplement statement naturally follows the conclusion statement. Evaluation ::: Results ::: Generating answers missing from the corpus The encoder-decoder network tends to re-generate answers in the training corpus. On the other hand, NAGM can generate answers not present in the corpus by virtue of its ensemble network that considers contexts and sentence combinations. Table 7 lists some examples. For example, answer #1 generated by NAGM is not in the training corpus. We think it was generated from the parts in italics in the following three sentences that are in the corpus: (1) “I think that it is better not to do anything from your side. If there is no reaction from him, it is better not to do anything even if there is opportunity to meet him next.” (2) “I think it may be good for you to approach your lover. Why don't you think positively about it without thinking too pessimistically?” (3) “Why don't you tell your lover that you usually do not say what you are thinking. $\cdots $ I think that it is important to communicate the feelings to your lover; how you like or care about him/her especially when you are quarreling with each other.” The generation of new answers is important for non-factoid answer systems, since they must cope with slight differences in question contexts from those in the corpus. Evaluation ::: Results ::: Online evaluation in “Love Advice” service Our ensemble network is currently being used in the love advice service of Oshiete goo BIBREF30. The service uses only the ensemble network to ensure that the service offers high-quality output free from grammar errors. 
We input the sequences in our evaluation corpus instead of the decoder output sequences into the ensemble network. Our ensemble network then learned the optimum combination of answer sequences as well as the closeness of the question and those sequences. As a result, it can construct an answer that corresponds to the situation underlying the question. In particular, 5,702 answers created by the AI, whose name is Oshi-el (Oshi-el means teaching angel), using our ensemble network in reply to 33,062 questions entered from September 6th, 2016 to November 17th, 2019, were judged by users of the service as good answers. Oshi-el output good answers at about twice the rate of the average human responder in Oshiete-goo who answered more than 100 questions in the love advice category. Thus, we think this is a good result. Furthermore, to evaluate the effectiveness of the supplemental information, we prepared 100 answers that only contained conclusion sentences during the same period of time. As a result, users rated the answers that contained both conclusion and supplement sentences as good 1.6 times more often than those that contained only conclusion sentences. This shows that our method successfully incorporated supplemental information in answering non-factoid questions. Conclusion We tackled the problem of conclusion-supplement answer generation for non-factoid questions, an important task in NLP. We presented an architecture, ensemble network, that uses an attention mechanism to reflect the context of the conclusion decoder's output sequence on the supplement decoder's output sequence. The ensemble network also assesses the closeness of the encoder input sequence to the output of each decoder and the combined output sequences of both decoders. Evaluations showed that our architecture was consistently superior to conventional encoder-decoders in this task. The ensemble network is now being used in the “Love Advice,” service as mentioned in the Evaluation section. Furthermore, our method, NAGM, can be generalized to generate much longer descriptions other than conclusion-supplement answers. For example, it is being used to generate Tanka, which is a genre of classical Japanese poetry that consists of five lines of words, in the following way. The first line is input by a human user to NAGM as a question, and NAGM generates second line (like a conclusion) and third line (like a supplement). The third line is again input to NAGM as a question, and NAGM generates the fourth line (like a conclusion) and fifth line (like a supplement).
For the Oshiete-goo dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, Trans, by 0.021, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.037. For the nfL6 dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, CLSTM, by 0.028, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.040. Human evaluation of the NAGM's generated outputs for the Oshiete-goo dataset had 47% ratings of (1), the highest rating, while CLSTM only received 21% ratings of (1). For the nfL6 dataset, the comparison of (1)'s was NAGM's 50% to CLSTM's 30%.
cb4727cd5643dabc3f5c95e851d5313f5d979bdc
cb4727cd5643dabc3f5c95e851d5313f5d979bdc_0
Q: How big is the improvement over the old state-of-the-art performance on the CoNLL-2009 dataset? Text: Introduction The task of semantic role labeling (SRL) is to recognize arguments for a given predicate in one sentence and assign labels to them, including “who” did “what” to “whom”, “when”, “where”, etc. Figure FIGREF1 is an example sentence with both semantic roles and syntactic dependencies. Since the nature of semantic roles is more abstract than the syntactic dependencies, SRL has a wide range of applications in different areas, e.g., text classification BIBREF0, text summarization BIBREF1, BIBREF2, recognizing textual entailment BIBREF3, BIBREF4, information extraction BIBREF5, question answering BIBREF6, BIBREF7, and so on. Traditionally, syntax is the bridge to reach semantics. However, along with the popularity of the end-to-end models in the NLP community, various recent studies have been discussing the necessity of syntax in the context of SRL. For instance, BIBREF8 have observed that only good syntax helps with the SRL performance. BIBREF9 have explored what kind of syntactic information or structure is better suited for the SRL model. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and claim that the syntax-agnostic model surpasses the syntax-aware ones. In this paper, we focus on analyzing the relationship between the syntactic dependency information and the SRL performance. In particular, we investigate the following four aspects: 1) Quality of the syntactic information: whether the performance of the syntactic parser output affects the SRL performance; 2) Representation of the syntactic information: how to represent the syntactic dependencies to better preserve the original structural information; 3) Incorporation of the syntactic information: at which layer of the SRL model and how to incorporate the syntactic information; and 4) Relationship with other external resources: when we append other external resources into the SRL model, whether their contributions are orthogonal to the syntactic dependencies. For the main architecture of the SRL model, many neural-network-based models use BiLSTM as the encoder (e.g., BIBREF10, BIBREF11, BIBREF12), while recently the self-attention-based encoder has become popular due to both its effectiveness and its efficiency BIBREF13, BIBREF14, BIBREF15. By its nature, the self-attention-based model directly captures the relation between words in the sentence, which makes it convenient to incorporate syntactic dependency information. BIBREF15 replace one attention head with pre-trained syntactic dependency information, which can be viewed as a hard way to inject syntax into the neural model. Inspired by the machine translation model proposed by BIBREF16, we introduce the Relation-Aware method to incorporate syntactic dependencies, which is a softer way to encode richer structural information. Various experiments for Chinese SRL on the CoNLL-2009 dataset are conducted to evaluate our hypotheses. From the empirical results, we observe that: 1) The quality of the syntactic information is essential when we incorporate structural information into the SRL model; 2) Deeper integration of the syntactic information achieves better results than simple concatenation to the inputs; 3) External pre-trained contextualized word representations help to boost the SRL performance further, and their contribution does not entirely overlap with that of the syntactic information.
In summary, the contributions of our work are: We present detailed experiments on different aspects of incorporating syntactic information into the SRL model, in what quality, in which representation and how to integrate. We introduce the relation-aware approach to employ syntactic dependencies into the self-attention-based SRL model. We compare our approach with previous studies, and achieve state-of-the-art results with and without external resources, i.e., in the so-called closed and open settings. Related work Traditional semantic role labeling task BIBREF17 presumes that the syntactic structure of the sentence is given, either being a constituent tree or a dependency tree, like in the CoNLL shared tasks BIBREF18, BIBREF19, BIBREF20. Recent neural-network-based approaches can be roughly categorized into two classes: 1) making use of the syntactic information BIBREF21, BIBREF22, BIBREF23, BIBREF24, and 2) pure end-to-end learning from tokens to semantic labels, e.g., BIBREF25, BIBREF26. BIBREF22 utilize an LSTM model to obtain embeddings from the syntactic dependency paths; while BIBREF24 construct Graph Convolutional Networks to encode the dependency structure. Although BIBREF8's approach is a pure end-to-end learning, they have included an analysis of adding syntactic dependency information into English SRL in the discussion section. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and BIBREF9 have compared different ways to represent and encode the syntactic knowledge. In another line of research, BIBREF14 utilize the Transformer network for the encoder instead of the BiLSTM. BIBREF15 present a novel and effective multi-head self-attention model to incorporate syntax, which is called LISA (Linguistically-Informed Self-Attention). We follow their approach of replacing one attention head with the dependency head information, but use a softer way to capture the pairwise relationship between input elements BIBREF16. For the datasets and annotations of the SRL task, most of the previous research focuses on 1) PropBank BIBREF27 and NomBank BIBREF28 annotations, i.e., the CoNLL 2005 BIBREF18 and CoNLL 2009 BIBREF20 shared tasks; 2) OntoNotes annotations BIBREF29, i.e., the CoNLL 2005 and CoNLL 2012 datasets and more; 3) and FrameNet BIBREF30 annotations. For the non-English languages, not all of them are widely available. Apart from these, in the broad range of semantic processing, other formalisms non-exhaustively include abstract meaning representation BIBREF31, universal decompositional semantics BIBREF32, and semantic dependency parsing BIBREF33. BIBREF34 give a better overview of various semantic representations. In this paper, we primarily work on the Chinese and English datasets from the CoNLL-2009 shared task and focus on the effectiveness of incorporating syntax into the Chinese SRL task. Approaches In this section, we first introduce the basic architecture of our self-attention-based SRL model, and then present two different ways to encode the syntactic dependency information. Afterwards, we compare three approaches to incorporate the syntax into the base model, concatenation to the input embedding, LISA, and our proposed relation-aware method. Approaches ::: The Basic Architecture Our basic model is a multi-head self-attention-based model, which is effective in SRL task as previous work proves BIBREF35. The model consists of three layers: the input layer, the encoder layer and the prediction layer as shown in Figure FIGREF5. 
Approaches ::: The Basic Architecture ::: Input Layer The input layer contains three types of embeddings: token embedding, predicate embedding, and positional embedding. Token Embedding includes word embedding, part-of-speech (POS) tag embedding. Predicate Embedding has been proposed by BIBREF8, and its binary embedding is used to indicate the predicates indices in each sentence. Positional Embedding encodes the order of the input word sequence. We follow BIBREF13 to use time positional embedding, which is formulated as follows: where $t$ is the position, $i$ means the dimension, and $d$ is the dimension of the model input embedding. Approaches ::: The Basic Architecture ::: Encoder Layer The self-attention block is almost the same as Transformer encoder proposed by BIBREF13. Specifically the Transformer encoder contains a feed-forward network (FFN) and a multi-head attention network. The former is followed by the latter. In this work, we exchange their order, so that the multi-head attention module is moved behind the FFN module as Figure FIGREF5 shows. FFN The FFN module consists of two affine layers with a ReLU activation in the middle. Formally, we have the following equation: Multi-Head Attention The basic attention mechanism used in the multi-head attention function is called “Scaled Dot-Product Attention”, which is formulated as follows: where $Q$ is queries, $K$ is keys, and $V$ is values. In the multi-head attention setting, it first maps the input matrix $X$ into queries, keys and values matrices by using $h$ different learned linear projections. Taking queries $Q$ as an example: where $0 \le i < h$. Keys and values use similar projections. On each of these projections, we perform the scaled dot-product attention in parallel. These parallel output values are concatenated and once again projected into the final values. Equation DISPLAY_FORM14 depicts the above operations. where More details about multi-head attention can be found in BIBREF13. Add & Norm We employ a residual connection to each module, followed by a layer normalization BIBREF36 operation. The output of each module is formulated as where $f(x)$ is implemented by each above module. Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Head & Relation The most intuitive way to represent syntactic information is to use individual dependency relations directly, like dependency head and dependency relation label, denoted as Dep and Rel for short. Except for LISA, where Dep is a one-hot matrix of dependency head word index described in SECREF25, in other cases, we use the corresponding head word. Rel is the dependency relation between the word and its syntactic head. We take both Dep and Rel as common strings and map them into dense vectors in the similar way of word embedding. Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Path & Relation Path In order to preserve the structural information of dependency trees as much as possible, we take the syntactic path between candidate arguments and predicates in dependency trees as linguistic knowledge. Referring to BIBREF9, we use the Tree-based Position Feature (TPF) as Dependency Path (DepPath) and use the Shortest Dependency Path (SDP) as Relation Path (RelPath). To generate DepPath & RelPath between candidate argument and predicate, we firstly find their lowest common ancestor. Then we get two sub-paths, one is from the ancestor to the predicate and the other is from the ancestor to the argument. 
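Returning to the encoder block described above, the following is a compact sketch with the FFN sub-layer applied before multi-head scaled dot-product attention, each sub-layer wrapped in a residual connection and layer normalization. PyTorch is assumed and the sizes are illustrative (the head dimension of 25 simply mirrors the setting mentioned later).

```python
import math
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model=400, d_ff=800, heads=16):
        super().__init__()
        assert d_model % heads == 0
        self.h, self.d_k = heads, d_model // heads
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.wq = nn.Linear(d_model, d_model)
        self.wk = nn.Linear(d_model, d_model)
        self.wv = nn.Linear(d_model, d_model)
        self.wo = nn.Linear(d_model, d_model)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def attention(self, x):
        B, T, _ = x.shape
        def split(t):                                   # (B, T, d) -> (B, h, T, d_k)
            return t.view(B, T, self.h, self.d_k).transpose(1, 2)
        q, k, v = split(self.wq(x)), split(self.wk(x)), split(self.wv(x))
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)
        out = torch.softmax(scores, dim=-1) @ v         # scaled dot-product attention
        return self.wo(out.transpose(1, 2).reshape(B, T, -1))

    def forward(self, x):
        x = self.norm1(x + self.ffn(x))                 # FFN sub-layer first
        x = self.norm2(x + self.attention(x))           # then multi-head attention
        return x

block = EncoderBlock()
print(block(torch.randn(2, 30, 400)).shape)             # torch.Size([2, 30, 400])
```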
For DepPath, we compute distance from ancestor to predicate and argument respectively and then concatenate two distances with the separator `,'. For RelPath, we concatenate the labels appearing in each sub-path with the separator “_" respectively to get two label paths, and then concatenate the two label paths with the separator `,'. UTF8gbsn As shown in Figure FIGREF21, the lowest common ancestor of the predicate “鼓励 (encourage)" and the candidate argument “农业 (agriculture)" is “鼓励 (encourage)", so their DepPath is “2,0" and its RelPath is “COMP_COMP,". We take both DepPath and RelPath as common strings and map them into dense vectors in the similar way of Dep and Rel. UTF8gbsn Approaches ::: Incorporation Methods ::: Input Embedding Concatenation To incorporate syntactic knowledge, one simple method is to take it as part of the neural network input, denoted as Input. We represent the syntactic information with dense vectors, and concatenate it with other information like word embedding: where $\oplus $ means concatenation; $E_W$ means the original inputs of the neural model and $E_S$ means the embedding of syntax information, such as Dep/Rel or DepPath/RelPath. Approaches ::: Incorporation Methods ::: LISA BIBREF15 propose the linguistically-informed self-attention model (LISA for short) to combine SRL and dependency parsing as multi-task learning in a subtle way. Based on the multi-head self-attention model, LISA uses one attention head to predict the dependency results and it can also directly use pre-trained dependency head results to replace the attention matrix during testing. Being different from their multi-task learning, we make the replacement of one attention head during both training and testing. Instead of the original $softmax$ attention matrix, we use a one-hot matrix, generated by mapping the dependency head index of each word into a 0-1 vector of the sentence length as Figure FIGREF27 shows. We add the dependency relation information with $V$ in the replaced head so that we can make full use of the syntactic knowledge. The replaced attention head is formulated as follows: where $M_D$ is the one-hot dependency head matrix and $E_R$ means the embedding of dependency relation information, such as Rel or RelPath. Approaches ::: Incorporation Methods ::: Relation-Aware Self-Attention Relation-aware self-attention model (RelAwe for brevity) incorporates external information into the attention. By this way, the model considers the pairwise relationships between input elements, which highly agrees with the task of SRL, i.e., aiming to find the semantic relations between the candidate argument and predicate in one sentence. Compared to the standard attention, in this paper, we add the dependency information into $Q$ and $V$ in each attention head, like equation (DISPLAY_FORM15) shows: where $E_D$ and $E_R$ mean the syntactic dependency head and relation information respectively. For our multi-layer multi-head self-attention model, we make this change to each head of the first $N$ self-attention layers. Experiment ::: Settings Datasets & Evaluation Metrics Our experiments are conducted on the CoNLL-2009 shared task dataset BIBREF20. We use the official evaluation script to compare the output of different system configurations, and report the labeled precision (P), labeled recall (R) and labeled f-score (F1) for the semantic dependencies. 
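Returning to the DepPath/RelPath construction illustrated with the 鼓励/农业 example earlier in this section, the following is a minimal sketch. It assumes the tree is given as 0-based head indices (with -1 for the root) and per-token relation labels, and that the argument-side sub-path is written before the predicate-side one, as in the worked example.

```python
def path_to_root(idx, heads):
    """Indices from `idx` up to the root; heads[i] is the 0-based head of token i, -1 for root."""
    path = [idx]
    while heads[idx] != -1:
        idx = heads[idx]
        path.append(idx)
    return path

def dep_and_rel_path(arg, prd, heads, rels):
    """DepPath: distances from the lowest common ancestor to the argument and to the predicate,
    joined by ','.  RelPath: the relation labels along each sub-path joined by '_', with the two
    label paths joined by ','."""
    arg_path, prd_path = path_to_root(arg, heads), path_to_root(prd, heads)
    prd_ancestors = set(prd_path)
    lca = next(node for node in arg_path if node in prd_ancestors)   # lowest common ancestor
    arg_sub = arg_path[:arg_path.index(lca)]   # nodes strictly below the LCA on the argument side
    prd_sub = prd_path[:prd_path.index(lca)]   # nodes strictly below the LCA on the predicate side
    dep_path = f"{len(arg_sub)},{len(prd_sub)}"
    rel_path = "_".join(rels[n] for n in arg_sub) + "," + "_".join(rels[n] for n in prd_sub)
    return dep_path, rel_path
```

For the Figure FIGREF21 sentence this returns ("2,0", "COMP_COMP,"), matching the example above; the resulting strings are then embedded like ordinary vocabulary items.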
Word Representations Most of our experiments are conducted in the closed setting without any external word embeddings or data resources than those provided by the CoNLL-2009 datasets. In the closed setting, word embedding is initialized by a Gaussian distribution with mean 0 and variance $\frac{1}{\sqrt{d}}$, where $d$ is the dimension of embedding size of each layer. For the experiments with external resources in the open setting, we utilize 1) word embeddings pre-trained with GloVe BIBREF37 on the Gigaword corpus for Chinese and the published embeddings with 100 dimensions pre-trained on Wikipedia and Gigaword for English; and 2) ELMo BIBREF38 and BERT BIBREF39, two recently proposed effective deep contextualized word representations. Other embeddings, i.e., POS embedding, linguistic knowledge embedding, and so on are initialized in same way as random word embedding no matter in closed or open setting. Syntactic Parsers In Table TABREF30, both Auto and Gold syntactic dependencies are provided by the dataset. Since the performance of the Auto is far behind the state-of-the-art BiaffineParser BIBREF40, we generate more dependency results by training BiaffineParser with different external knowledge, including pre-trained word embedding and BERT. Performance for different parsers is listed in Table TABREF30. Parameters In this work, we set word embedding size $d_w=100$, POS embedding size $d_t=50$. The predicate embedding size is set as $d_p=100$. The syntax-related embedding size varies along with different configurations, so as the feature embedding size $d_f$. To facilitate residual connections, all sub-layers in the model produce outputs of dimension $d_{model}=d_f+d_p$. The hidden dimension $d_{ff}=800$ is applied for all the experiments. We set the number of shared self-attention blocks $N=10$. The number of heads varies with $d_{model}$, but dimension of each head is 25. Besides, LISA incorporates syntax knowledge in the 5-th self-attention layer while RelAwe incorporates in the first 5 layers. We apply the similar dropout strategy as BIBREF13, i.e., the attention and residual dropout values are $0.2$ and $0.3$ respectively. The dropout is also applied in the middle layer of FFN with value $0.2$. We also employ label smoothing BIBREF41 of value $0.1$ during training. We use softmax-cross-entropy as our loss function, and use the Adadelta optimizer BIBREF42 with $\epsilon =10^{-6}$ and $\rho =0.95$. For all experiments, we train the model $200,000$ steps with learning rate $lr=1.0$, and each batch has 4096 words. All the hyper-parameters are tuned on the development set. Configurations We use different abbreviations to represent the parsing results, syntactic dependency representations, and incorporation methods. All the system configurations in our experiments are listed in Table TABREF36. Experiment ::: Quality of the Syntactic Dependencies We use the above-mentioned dependency trees of different quality for comparison, with Dep&Rel representation on our RelAwe model. In addition, we generate one more data AutoDel by deleting all the erroneous dependency heads and relations from the provided Auto data according to the gold heads and relations, and we do not replace them with any alternative heads and relations. We take this setting as another reference (along with GOLD) to indicate that erroneous syntax information may hurt the performance of the SRL model. We take the Gold as the upperbound reference of our task setting. 
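As a concrete illustration of how the AutoDel data just described could be derived, here is a minimal sketch; the paper does not state how a deleted arc is represented, so the None placeholders are an assumption.

```python
def make_autodel(auto_heads, auto_rels, gold_heads, gold_rels):
    """Keep an automatic dependency arc only if both its head and its relation match the gold
    annotation; otherwise drop it without substituting any alternative head or relation."""
    del_heads, del_rels = [], []
    for ah, ar, gh, gr in zip(auto_heads, auto_rels, gold_heads, gold_rels):
        if ah == gh and ar == gr:
            del_heads.append(ah)
            del_rels.append(ar)
        else:
            del_heads.append(None)   # no head is provided for this token
            del_rels.append(None)    # and no relation either
    return del_heads, del_rels
```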
Experiment results in Table TABREF37 demonstrate that, incorporating syntactic knowledge into the SRL model can achieve better performance and overall, the better the quality is, the better the SRL model performs. This is consistent with the previous study by BIBREF8 on the English dataset. Closer observation reveals two additional interesting phenomena. Firstly, SRL performance improvement is not proportionate to the improvement of dependency quality. When switching syntactic dependency trees from Auto to Biaffine, SRL performance improves 0.5%, although syntactic dependency improves about 8%. In contrast, the difference between Biaffine and BiaffineBert shows more significant improvement of 1.5%. The possible reason is that BiaffineBert provides key dependency information which is missing in other configurations. Secondly, the SRL performance gap between AutoDel and Auto is large though they provide the same correct syntactic information. This may indicate that incorporating erroneous syntactic knowledge hurts the SRL model, and even providing more correct dependencies cannot make up for the harm (cf. BiaffineBert). Experiment ::: Representation of the Syntactic Dependencies Apart from Dep and Rel, we also use DepPath and RelPath to encode the syntactic knowledge. In this subsection, we conduct experiments to compare different syntactic encoding in our SRL model. We base the experiments on our RelAwe model, since it is easier to incorporate different representations for comparison. When generating the RelPath, we filter the paths 1) when the dependency distance between the predicate and the candidate argument is more than 4, and 2) when the RelPath's frequency is less than 10. No matter in which representation, dependency label information is more important than the head and the combination of the two achieves better performance as our experiment results in Table TABREF41 show. Furthermore, using Biaffine dependency trees, DepPath and RelPath perform better than Dep and Rel. This is because of the capability of DepPath and RelPath to capture more structural information of the dependency trees. Comparing Table TABREF37 and TABREF41, when using gold dependencies, DepPath&RelPath can achieve much better result than Dep&Rel. But with the Auto trees, DepPath&RelPath is much worse. Therefore, structural information is much more sensitive to the quality of dependency trees due to error propagation. Experiment ::: Incorporation Methods [9]From the mechanism of LISA, we can find that the replaced attention head can't copy the syntactic dependency heads from DepPath. This subsection discusses the effectiveness of different incorporation methods of the syntactic knowledge. We take Biaffine's output as our dependency information for the comparison. Firstly, results in Table TABREF44 show that with little dependency information (Dep), LISA performs better, while incorporating richer syntactic knowledge (Dep&Rel or Dep&RelPath), three methods achieve similar performance. Overall, RelAwe achieves best results given enough syntactic knowledge. Secondly, Input and LISA achieve much better performance when we combine the dependency head information and the relation, while BIBREF15 have not introduced relation information to the LISA model and BIBREF9 have not combined the head and relation information either. Our proposed RelAwe method with DepPath&RelPath representation performs the best, which encodes the richest syntactic knowledge. 
Lastly, under the same settings, LISA and RelAwe perform better than Input, which indicates the importance of the location where the model incorporates the syntax, the input layer vs. the encoder layer. Experiment ::: External Resources Apart from the experiments with syntactic knowledge itself, we also compare different external resources to discover their relationship with the syntax, including pre-trained word embeddings, ELMo, and BERT. We conduct experiments with our best setting, the RelAwe model with DepPath & RelPath and the results are listed in Table TABREF45. The plain word embedding improves a little in such settings with syntactic information, while for the newly proposed Elmo and Bert, both of them can boost the models further. Experiment ::: Final Results on the Chinese Test Data Based on the above experiments and analyses, we present the overall results of our model in this subsection. We train the three models (Input, LISA, and RelAwe) with their best settings without any external knowledge as Closed, and we take the same models with Bert as Open. The DepPath&RelPath from Gold without external knowledge serves as the Gold for reference. Since we have been focusing on the task of argument identification and labeling, for both Closed and Open, we follow BIBREF22 to use existing systems' predicate senses BIBREF43 to exclude them from comparison. Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings. Notice that our best Closed model can almost perform as well as the state-of-the-art model while the latter utilizes pre-trained word embeddings. Besides, performance gap between three models under Open setting is very small. It indicates that the representation ability of BERT is so powerful and may contains rich syntactic information. At last, the Gold result is much higher than the other models, indicating that there is still large space for improvement for this task. Experiment ::: Results on the English Data We also conduct several experiments on the English dataset to validate the effectiveness of our approaches on other languages than Chinese and the results are in Table TABREF49. Although both configurations are not exactly the same as their original papers, we tried our best to reproduce their methods on the CoNLL2009 dataset for our comparison. Overall, the results are consistent with the Chinese experiments, while the improvement is not as large as the Chinese counterparts. The RelAwe model with DepPath&RelPath still achieves the best performance. Applying our syntax-enhanced model to more languages will be an interesting research direction to work on in the future. [10]We reimplement LISA in BIBREF15 as LISA(Dep), and BIBREF9's best DepPath approach as Input(DepPath). Therefore, we can compare with their work as fairly as possible. Other settings are the best configurations for their corresponding methods. Conclusion and Future Work This paper investigates how to incorporate syntactic dependency information into semantic role labeling in depth. Firstly, we confirm that dependency trees of better quality are more helpful for the SRL task. Secondly, we present different ways to encode the trees and the experiments show that keeping more (correct) structural information during encoding improves the SRL performance. 
Thirdly, we compare three incorporation methods and find that our proposed relation-aware self-attention-based model is the most effective one. Although our experiments focus primarily on the Chinese dataset, the approach is largely language-independent. Beyond our preliminary experiments on the English data, applying the approach to other languages is an interesting direction for future work.
Our Open model achieves an F1-score more than 3 points higher than the previous state-of-the-art result.
33d864153822bd378a98a732ace720e2c06a6bc6
33d864153822bd378a98a732ace720e2c06a6bc6_0
Q: What is new state-of-the-art performance on CoNLL-2009 dataset? Text: Introduction The task of semantic role labeling (SRL) is to recognize arguments for a given predicate in one sentence and assign labels to them, including “who” did “what” to “whom”, “when”, “where”, etc. Figure FIGREF1 is an example sentence with both semantic roles and syntactic dependencies. Since the nature of semantic roles is more abstract than the syntactic dependencies, SRL has a wide range of applications in different areas, e.g., text classification BIBREF0, text summarization BIBREF1, BIBREF2, recognizing textual entailment BIBREF3, BIBREF4, information extraction BIBREF5, question answering BIBREF6, BIBREF7, and so on. UTF8gbsn Traditionally, syntax is the bridge to reach semantics. However, along with the popularity of the end-to-end models in the NLP community, various recent studies have been discussing the necessity of syntax in the context of SRL. For instance, BIBREF8 have observed that only good syntax helps with the SRL performance. BIBREF9 have explored what kind of syntactic information or structure is better suited for the SRL model. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and claim that the syntax-agnostic model surpasses the syntax-aware ones. In this paper, we focus on analyzing the relationship between the syntactic dependency information and the SRL performance. In particular, we investigate the following four aspects: 1) Quality of the syntactic information: whether the performance of the syntactic parser output affects the SRL performance; 2) Representation of the syntactic information: how to represent the syntactic dependencies to better preserve the original structural information; 3) Incorporation of the syntactic information: at which layer of the SRL model and how to incorporate the syntactic information; and 4) the Relationship with other external resources: when we append other external resources into the SRL model, whether their contributions are orthogonal to the syntactic dependencies. For the main architecture of the SRL model, many neural-network-based models use BiLSTM as the encoder (e.g., BIBREF10, BIBREF11, BIBREF12), while recently self-attention-based encoder becomes popular due to both the effectiveness and the efficiency BIBREF13, BIBREF14, BIBREF15. By its nature, the self-attention-based model directly captures the relation between words in the sentence, which is convenient to incorporate syntactic dependency information. BIBREF15 replace one attention head with pre-trained syntactic dependency information, which can be viewed as a hard way to inject syntax into the neural model. Enlightened by the machine translation model proposed by BIBREF16, we introduce the Relation-Aware method to incorporate syntactic dependencies, which is a softer way to encode richer structural information. Various experiments for the Chinese SRL on the CoNLL-2009 dataset are conducted to evaluate our hypotheses. From the empirical results, we observe that: 1) The quality of the syntactic information is essential when we incorporate structural information into the SRL model; 2) Deeper integration of the syntactic information achieves better results than the simple concatenation to the inputs; 3) External pre-trained contextualized word representations help to boost the SRL performance further, which is not entirely overlapping with the syntactic information. 
In summary, the contributions of our work are: We present detailed experiments on different aspects of incorporating syntactic information into the SRL model, in what quality, in which representation and how to integrate. We introduce the relation-aware approach to employ syntactic dependencies into the self-attention-based SRL model. We compare our approach with previous studies, and achieve state-of-the-art results with and without external resources, i.e., in the so-called closed and open settings. Related work Traditional semantic role labeling task BIBREF17 presumes that the syntactic structure of the sentence is given, either being a constituent tree or a dependency tree, like in the CoNLL shared tasks BIBREF18, BIBREF19, BIBREF20. Recent neural-network-based approaches can be roughly categorized into two classes: 1) making use of the syntactic information BIBREF21, BIBREF22, BIBREF23, BIBREF24, and 2) pure end-to-end learning from tokens to semantic labels, e.g., BIBREF25, BIBREF26. BIBREF22 utilize an LSTM model to obtain embeddings from the syntactic dependency paths; while BIBREF24 construct Graph Convolutional Networks to encode the dependency structure. Although BIBREF8's approach is a pure end-to-end learning, they have included an analysis of adding syntactic dependency information into English SRL in the discussion section. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and BIBREF9 have compared different ways to represent and encode the syntactic knowledge. In another line of research, BIBREF14 utilize the Transformer network for the encoder instead of the BiLSTM. BIBREF15 present a novel and effective multi-head self-attention model to incorporate syntax, which is called LISA (Linguistically-Informed Self-Attention). We follow their approach of replacing one attention head with the dependency head information, but use a softer way to capture the pairwise relationship between input elements BIBREF16. For the datasets and annotations of the SRL task, most of the previous research focuses on 1) PropBank BIBREF27 and NomBank BIBREF28 annotations, i.e., the CoNLL 2005 BIBREF18 and CoNLL 2009 BIBREF20 shared tasks; 2) OntoNotes annotations BIBREF29, i.e., the CoNLL 2005 and CoNLL 2012 datasets and more; 3) and FrameNet BIBREF30 annotations. For the non-English languages, not all of them are widely available. Apart from these, in the broad range of semantic processing, other formalisms non-exhaustively include abstract meaning representation BIBREF31, universal decompositional semantics BIBREF32, and semantic dependency parsing BIBREF33. BIBREF34 give a better overview of various semantic representations. In this paper, we primarily work on the Chinese and English datasets from the CoNLL-2009 shared task and focus on the effectiveness of incorporating syntax into the Chinese SRL task. Approaches In this section, we first introduce the basic architecture of our self-attention-based SRL model, and then present two different ways to encode the syntactic dependency information. Afterwards, we compare three approaches to incorporate the syntax into the base model, concatenation to the input embedding, LISA, and our proposed relation-aware method. Approaches ::: The Basic Architecture Our basic model is a multi-head self-attention-based model, which is effective in SRL task as previous work proves BIBREF35. The model consists of three layers: the input layer, the encoder layer and the prediction layer as shown in Figure FIGREF5. 
Approaches ::: The Basic Architecture ::: Input Layer The input layer contains three types of embeddings: token embedding, predicate embedding, and positional embedding. Token Embedding includes word embedding, part-of-speech (POS) tag embedding. Predicate Embedding has been proposed by BIBREF8, and its binary embedding is used to indicate the predicates indices in each sentence. Positional Embedding encodes the order of the input word sequence. We follow BIBREF13 to use time positional embedding, which is formulated as follows: where $t$ is the position, $i$ means the dimension, and $d$ is the dimension of the model input embedding. Approaches ::: The Basic Architecture ::: Encoder Layer The self-attention block is almost the same as Transformer encoder proposed by BIBREF13. Specifically the Transformer encoder contains a feed-forward network (FFN) and a multi-head attention network. The former is followed by the latter. In this work, we exchange their order, so that the multi-head attention module is moved behind the FFN module as Figure FIGREF5 shows. FFN The FFN module consists of two affine layers with a ReLU activation in the middle. Formally, we have the following equation: Multi-Head Attention The basic attention mechanism used in the multi-head attention function is called “Scaled Dot-Product Attention”, which is formulated as follows: where $Q$ is queries, $K$ is keys, and $V$ is values. In the multi-head attention setting, it first maps the input matrix $X$ into queries, keys and values matrices by using $h$ different learned linear projections. Taking queries $Q$ as an example: where $0 \le i < h$. Keys and values use similar projections. On each of these projections, we perform the scaled dot-product attention in parallel. These parallel output values are concatenated and once again projected into the final values. Equation DISPLAY_FORM14 depicts the above operations. where More details about multi-head attention can be found in BIBREF13. Add & Norm We employ a residual connection to each module, followed by a layer normalization BIBREF36 operation. The output of each module is formulated as where $f(x)$ is implemented by each above module. Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Head & Relation The most intuitive way to represent syntactic information is to use individual dependency relations directly, like dependency head and dependency relation label, denoted as Dep and Rel for short. Except for LISA, where Dep is a one-hot matrix of dependency head word index described in SECREF25, in other cases, we use the corresponding head word. Rel is the dependency relation between the word and its syntactic head. We take both Dep and Rel as common strings and map them into dense vectors in the similar way of word embedding. Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Path & Relation Path In order to preserve the structural information of dependency trees as much as possible, we take the syntactic path between candidate arguments and predicates in dependency trees as linguistic knowledge. Referring to BIBREF9, we use the Tree-based Position Feature (TPF) as Dependency Path (DepPath) and use the Shortest Dependency Path (SDP) as Relation Path (RelPath). To generate DepPath & RelPath between candidate argument and predicate, we firstly find their lowest common ancestor. Then we get two sub-paths, one is from the ancestor to the predicate and the other is from the ancestor to the argument. 
For DepPath, we compute distance from ancestor to predicate and argument respectively and then concatenate two distances with the separator `,'. For RelPath, we concatenate the labels appearing in each sub-path with the separator “_" respectively to get two label paths, and then concatenate the two label paths with the separator `,'. UTF8gbsn As shown in Figure FIGREF21, the lowest common ancestor of the predicate “鼓励 (encourage)" and the candidate argument “农业 (agriculture)" is “鼓励 (encourage)", so their DepPath is “2,0" and its RelPath is “COMP_COMP,". We take both DepPath and RelPath as common strings and map them into dense vectors in the similar way of Dep and Rel. UTF8gbsn Approaches ::: Incorporation Methods ::: Input Embedding Concatenation To incorporate syntactic knowledge, one simple method is to take it as part of the neural network input, denoted as Input. We represent the syntactic information with dense vectors, and concatenate it with other information like word embedding: where $\oplus $ means concatenation; $E_W$ means the original inputs of the neural model and $E_S$ means the embedding of syntax information, such as Dep/Rel or DepPath/RelPath. Approaches ::: Incorporation Methods ::: LISA BIBREF15 propose the linguistically-informed self-attention model (LISA for short) to combine SRL and dependency parsing as multi-task learning in a subtle way. Based on the multi-head self-attention model, LISA uses one attention head to predict the dependency results and it can also directly use pre-trained dependency head results to replace the attention matrix during testing. Being different from their multi-task learning, we make the replacement of one attention head during both training and testing. Instead of the original $softmax$ attention matrix, we use a one-hot matrix, generated by mapping the dependency head index of each word into a 0-1 vector of the sentence length as Figure FIGREF27 shows. We add the dependency relation information with $V$ in the replaced head so that we can make full use of the syntactic knowledge. The replaced attention head is formulated as follows: where $M_D$ is the one-hot dependency head matrix and $E_R$ means the embedding of dependency relation information, such as Rel or RelPath. Approaches ::: Incorporation Methods ::: Relation-Aware Self-Attention Relation-aware self-attention model (RelAwe for brevity) incorporates external information into the attention. By this way, the model considers the pairwise relationships between input elements, which highly agrees with the task of SRL, i.e., aiming to find the semantic relations between the candidate argument and predicate in one sentence. Compared to the standard attention, in this paper, we add the dependency information into $Q$ and $V$ in each attention head, like equation (DISPLAY_FORM15) shows: where $E_D$ and $E_R$ mean the syntactic dependency head and relation information respectively. For our multi-layer multi-head self-attention model, we make this change to each head of the first $N$ self-attention layers. Experiment ::: Settings Datasets & Evaluation Metrics Our experiments are conducted on the CoNLL-2009 shared task dataset BIBREF20. We use the official evaluation script to compare the output of different system configurations, and report the labeled precision (P), labeled recall (R) and labeled f-score (F1) for the semantic dependencies. 
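Although all reported numbers come from the official CoNLL-2009 script, the labeled precision, recall, and F1 just mentioned can be sketched as follows, treating each (predicate, argument, label) triple as one semantic dependency; this is an illustrative simplification and not the official scorer, which also handles predicate senses.

```python
def labeled_prf(gold, pred):
    """gold, pred: sets of (predicate_index, argument_index, role_label) triples."""
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

# Hypothetical example triples:
gold = {(2, 0, "A0"), (2, 5, "A1")}
pred = {(2, 0, "A0"), (2, 4, "A1")}
print(labeled_prf(gold, pred))   # (0.5, 0.5, 0.5)
```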
Word Representations Most of our experiments are conducted in the closed setting without any external word embeddings or data resources than those provided by the CoNLL-2009 datasets. In the closed setting, word embedding is initialized by a Gaussian distribution with mean 0 and variance $\frac{1}{\sqrt{d}}$, where $d$ is the dimension of embedding size of each layer. For the experiments with external resources in the open setting, we utilize 1) word embeddings pre-trained with GloVe BIBREF37 on the Gigaword corpus for Chinese and the published embeddings with 100 dimensions pre-trained on Wikipedia and Gigaword for English; and 2) ELMo BIBREF38 and BERT BIBREF39, two recently proposed effective deep contextualized word representations. Other embeddings, i.e., POS embedding, linguistic knowledge embedding, and so on are initialized in same way as random word embedding no matter in closed or open setting. Syntactic Parsers In Table TABREF30, both Auto and Gold syntactic dependencies are provided by the dataset. Since the performance of the Auto is far behind the state-of-the-art BiaffineParser BIBREF40, we generate more dependency results by training BiaffineParser with different external knowledge, including pre-trained word embedding and BERT. Performance for different parsers is listed in Table TABREF30. Parameters In this work, we set word embedding size $d_w=100$, POS embedding size $d_t=50$. The predicate embedding size is set as $d_p=100$. The syntax-related embedding size varies along with different configurations, so as the feature embedding size $d_f$. To facilitate residual connections, all sub-layers in the model produce outputs of dimension $d_{model}=d_f+d_p$. The hidden dimension $d_{ff}=800$ is applied for all the experiments. We set the number of shared self-attention blocks $N=10$. The number of heads varies with $d_{model}$, but dimension of each head is 25. Besides, LISA incorporates syntax knowledge in the 5-th self-attention layer while RelAwe incorporates in the first 5 layers. We apply the similar dropout strategy as BIBREF13, i.e., the attention and residual dropout values are $0.2$ and $0.3$ respectively. The dropout is also applied in the middle layer of FFN with value $0.2$. We also employ label smoothing BIBREF41 of value $0.1$ during training. We use softmax-cross-entropy as our loss function, and use the Adadelta optimizer BIBREF42 with $\epsilon =10^{-6}$ and $\rho =0.95$. For all experiments, we train the model $200,000$ steps with learning rate $lr=1.0$, and each batch has 4096 words. All the hyper-parameters are tuned on the development set. Configurations We use different abbreviations to represent the parsing results, syntactic dependency representations, and incorporation methods. All the system configurations in our experiments are listed in Table TABREF36. Experiment ::: Quality of the Syntactic Dependencies We use the above-mentioned dependency trees of different quality for comparison, with Dep&Rel representation on our RelAwe model. In addition, we generate one more data AutoDel by deleting all the erroneous dependency heads and relations from the provided Auto data according to the gold heads and relations, and we do not replace them with any alternative heads and relations. We take this setting as another reference (along with GOLD) to indicate that erroneous syntax information may hurt the performance of the SRL model. We take the Gold as the upperbound reference of our task setting. 
Experiment results in Table TABREF37 demonstrate that, incorporating syntactic knowledge into the SRL model can achieve better performance and overall, the better the quality is, the better the SRL model performs. This is consistent with the previous study by BIBREF8 on the English dataset. Closer observation reveals two additional interesting phenomena. Firstly, SRL performance improvement is not proportionate to the improvement of dependency quality. When switching syntactic dependency trees from Auto to Biaffine, SRL performance improves 0.5%, although syntactic dependency improves about 8%. In contrast, the difference between Biaffine and BiaffineBert shows more significant improvement of 1.5%. The possible reason is that BiaffineBert provides key dependency information which is missing in other configurations. Secondly, the SRL performance gap between AutoDel and Auto is large though they provide the same correct syntactic information. This may indicate that incorporating erroneous syntactic knowledge hurts the SRL model, and even providing more correct dependencies cannot make up for the harm (cf. BiaffineBert). Experiment ::: Representation of the Syntactic Dependencies Apart from Dep and Rel, we also use DepPath and RelPath to encode the syntactic knowledge. In this subsection, we conduct experiments to compare different syntactic encoding in our SRL model. We base the experiments on our RelAwe model, since it is easier to incorporate different representations for comparison. When generating the RelPath, we filter the paths 1) when the dependency distance between the predicate and the candidate argument is more than 4, and 2) when the RelPath's frequency is less than 10. No matter in which representation, dependency label information is more important than the head and the combination of the two achieves better performance as our experiment results in Table TABREF41 show. Furthermore, using Biaffine dependency trees, DepPath and RelPath perform better than Dep and Rel. This is because of the capability of DepPath and RelPath to capture more structural information of the dependency trees. Comparing Table TABREF37 and TABREF41, when using gold dependencies, DepPath&RelPath can achieve much better result than Dep&Rel. But with the Auto trees, DepPath&RelPath is much worse. Therefore, structural information is much more sensitive to the quality of dependency trees due to error propagation. Experiment ::: Incorporation Methods [9]From the mechanism of LISA, we can find that the replaced attention head can't copy the syntactic dependency heads from DepPath. This subsection discusses the effectiveness of different incorporation methods of the syntactic knowledge. We take Biaffine's output as our dependency information for the comparison. Firstly, results in Table TABREF44 show that with little dependency information (Dep), LISA performs better, while incorporating richer syntactic knowledge (Dep&Rel or Dep&RelPath), three methods achieve similar performance. Overall, RelAwe achieves best results given enough syntactic knowledge. Secondly, Input and LISA achieve much better performance when we combine the dependency head information and the relation, while BIBREF15 have not introduced relation information to the LISA model and BIBREF9 have not combined the head and relation information either. Our proposed RelAwe method with DepPath&RelPath representation performs the best, which encodes the richest syntactic knowledge. 
Lastly, under the same settings, LISA and RelAwe perform better than Input, which indicates the importance of the location where the model incorporates the syntax, the input layer vs. the encoder layer. Experiment ::: External Resources Apart from the experiments with syntactic knowledge itself, we also compare different external resources to discover their relationship with the syntax, including pre-trained word embeddings, ELMo, and BERT. We conduct experiments with our best setting, the RelAwe model with DepPath & RelPath and the results are listed in Table TABREF45. The plain word embedding improves a little in such settings with syntactic information, while for the newly proposed Elmo and Bert, both of them can boost the models further. Experiment ::: Final Results on the Chinese Test Data Based on the above experiments and analyses, we present the overall results of our model in this subsection. We train the three models (Input, LISA, and RelAwe) with their best settings without any external knowledge as Closed, and we take the same models with Bert as Open. The DepPath&RelPath from Gold without external knowledge serves as the Gold for reference. Since we have been focusing on the task of argument identification and labeling, for both Closed and Open, we follow BIBREF22 to use existing systems' predicate senses BIBREF43 to exclude them from comparison. Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings. Notice that our best Closed model can almost perform as well as the state-of-the-art model while the latter utilizes pre-trained word embeddings. Besides, performance gap between three models under Open setting is very small. It indicates that the representation ability of BERT is so powerful and may contains rich syntactic information. At last, the Gold result is much higher than the other models, indicating that there is still large space for improvement for this task. Experiment ::: Results on the English Data We also conduct several experiments on the English dataset to validate the effectiveness of our approaches on other languages than Chinese and the results are in Table TABREF49. Although both configurations are not exactly the same as their original papers, we tried our best to reproduce their methods on the CoNLL2009 dataset for our comparison. Overall, the results are consistent with the Chinese experiments, while the improvement is not as large as the Chinese counterparts. The RelAwe model with DepPath&RelPath still achieves the best performance. Applying our syntax-enhanced model to more languages will be an interesting research direction to work on in the future. [10]We reimplement LISA in BIBREF15 as LISA(Dep), and BIBREF9's best DepPath approach as Input(DepPath). Therefore, we can compare with their work as fairly as possible. Other settings are the best configurations for their corresponding methods. Conclusion and Future Work This paper investigates how to incorporate syntactic dependency information into semantic role labeling in depth. Firstly, we confirm that dependency trees of better quality are more helpful for the SRL task. Secondly, we present different ways to encode the trees and the experiments show that keeping more (correct) structural information during encoding improves the SRL performance. 
Thirdly, we compare three incorporation methods and find that our proposed relation-aware self-attention-based model is the most effective one. Although our experiments focus primarily on the Chinese dataset, the approach is largely language-independent. Beyond our preliminary experiments on the English data, applying the approach to other languages is an interesting direction for future work.
In the closed setting, 84.22 F1; in the open setting, 87.35 F1.
b13cf4205f3952c3066b9fb81bd5c4277e2bc7f5
b13cf4205f3952c3066b9fb81bd5c4277e2bc7f5_0
Q: How big is CoNLL-2009 dataset? Text: Introduction The task of semantic role labeling (SRL) is to recognize arguments for a given predicate in one sentence and assign labels to them, including “who” did “what” to “whom”, “when”, “where”, etc. Figure FIGREF1 is an example sentence with both semantic roles and syntactic dependencies. Since the nature of semantic roles is more abstract than the syntactic dependencies, SRL has a wide range of applications in different areas, e.g., text classification BIBREF0, text summarization BIBREF1, BIBREF2, recognizing textual entailment BIBREF3, BIBREF4, information extraction BIBREF5, question answering BIBREF6, BIBREF7, and so on. UTF8gbsn Traditionally, syntax is the bridge to reach semantics. However, along with the popularity of the end-to-end models in the NLP community, various recent studies have been discussing the necessity of syntax in the context of SRL. For instance, BIBREF8 have observed that only good syntax helps with the SRL performance. BIBREF9 have explored what kind of syntactic information or structure is better suited for the SRL model. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and claim that the syntax-agnostic model surpasses the syntax-aware ones. In this paper, we focus on analyzing the relationship between the syntactic dependency information and the SRL performance. In particular, we investigate the following four aspects: 1) Quality of the syntactic information: whether the performance of the syntactic parser output affects the SRL performance; 2) Representation of the syntactic information: how to represent the syntactic dependencies to better preserve the original structural information; 3) Incorporation of the syntactic information: at which layer of the SRL model and how to incorporate the syntactic information; and 4) the Relationship with other external resources: when we append other external resources into the SRL model, whether their contributions are orthogonal to the syntactic dependencies. For the main architecture of the SRL model, many neural-network-based models use BiLSTM as the encoder (e.g., BIBREF10, BIBREF11, BIBREF12), while recently self-attention-based encoder becomes popular due to both the effectiveness and the efficiency BIBREF13, BIBREF14, BIBREF15. By its nature, the self-attention-based model directly captures the relation between words in the sentence, which is convenient to incorporate syntactic dependency information. BIBREF15 replace one attention head with pre-trained syntactic dependency information, which can be viewed as a hard way to inject syntax into the neural model. Enlightened by the machine translation model proposed by BIBREF16, we introduce the Relation-Aware method to incorporate syntactic dependencies, which is a softer way to encode richer structural information. Various experiments for the Chinese SRL on the CoNLL-2009 dataset are conducted to evaluate our hypotheses. From the empirical results, we observe that: 1) The quality of the syntactic information is essential when we incorporate structural information into the SRL model; 2) Deeper integration of the syntactic information achieves better results than the simple concatenation to the inputs; 3) External pre-trained contextualized word representations help to boost the SRL performance further, which is not entirely overlapping with the syntactic information. 
In summary, the contributions of our work are: We present detailed experiments on different aspects of incorporating syntactic information into the SRL model, in what quality, in which representation and how to integrate. We introduce the relation-aware approach to employ syntactic dependencies into the self-attention-based SRL model. We compare our approach with previous studies, and achieve state-of-the-art results with and without external resources, i.e., in the so-called closed and open settings. Related work Traditional semantic role labeling task BIBREF17 presumes that the syntactic structure of the sentence is given, either being a constituent tree or a dependency tree, like in the CoNLL shared tasks BIBREF18, BIBREF19, BIBREF20. Recent neural-network-based approaches can be roughly categorized into two classes: 1) making use of the syntactic information BIBREF21, BIBREF22, BIBREF23, BIBREF24, and 2) pure end-to-end learning from tokens to semantic labels, e.g., BIBREF25, BIBREF26. BIBREF22 utilize an LSTM model to obtain embeddings from the syntactic dependency paths; while BIBREF24 construct Graph Convolutional Networks to encode the dependency structure. Although BIBREF8's approach is a pure end-to-end learning, they have included an analysis of adding syntactic dependency information into English SRL in the discussion section. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and BIBREF9 have compared different ways to represent and encode the syntactic knowledge. In another line of research, BIBREF14 utilize the Transformer network for the encoder instead of the BiLSTM. BIBREF15 present a novel and effective multi-head self-attention model to incorporate syntax, which is called LISA (Linguistically-Informed Self-Attention). We follow their approach of replacing one attention head with the dependency head information, but use a softer way to capture the pairwise relationship between input elements BIBREF16. For the datasets and annotations of the SRL task, most of the previous research focuses on 1) PropBank BIBREF27 and NomBank BIBREF28 annotations, i.e., the CoNLL 2005 BIBREF18 and CoNLL 2009 BIBREF20 shared tasks; 2) OntoNotes annotations BIBREF29, i.e., the CoNLL 2005 and CoNLL 2012 datasets and more; 3) and FrameNet BIBREF30 annotations. For the non-English languages, not all of them are widely available. Apart from these, in the broad range of semantic processing, other formalisms non-exhaustively include abstract meaning representation BIBREF31, universal decompositional semantics BIBREF32, and semantic dependency parsing BIBREF33. BIBREF34 give a better overview of various semantic representations. In this paper, we primarily work on the Chinese and English datasets from the CoNLL-2009 shared task and focus on the effectiveness of incorporating syntax into the Chinese SRL task. Approaches In this section, we first introduce the basic architecture of our self-attention-based SRL model, and then present two different ways to encode the syntactic dependency information. Afterwards, we compare three approaches to incorporate the syntax into the base model, concatenation to the input embedding, LISA, and our proposed relation-aware method. Approaches ::: The Basic Architecture Our basic model is a multi-head self-attention-based model, which is effective in SRL task as previous work proves BIBREF35. The model consists of three layers: the input layer, the encoder layer and the prediction layer as shown in Figure FIGREF5. 
Approaches ::: The Basic Architecture ::: Input Layer The input layer contains three types of embeddings: token embedding, predicate embedding, and positional embedding. Token Embedding includes word embedding, part-of-speech (POS) tag embedding. Predicate Embedding has been proposed by BIBREF8, and its binary embedding is used to indicate the predicates indices in each sentence. Positional Embedding encodes the order of the input word sequence. We follow BIBREF13 to use time positional embedding, which is formulated as follows: where $t$ is the position, $i$ means the dimension, and $d$ is the dimension of the model input embedding. Approaches ::: The Basic Architecture ::: Encoder Layer The self-attention block is almost the same as Transformer encoder proposed by BIBREF13. Specifically the Transformer encoder contains a feed-forward network (FFN) and a multi-head attention network. The former is followed by the latter. In this work, we exchange their order, so that the multi-head attention module is moved behind the FFN module as Figure FIGREF5 shows. FFN The FFN module consists of two affine layers with a ReLU activation in the middle. Formally, we have the following equation: Multi-Head Attention The basic attention mechanism used in the multi-head attention function is called “Scaled Dot-Product Attention”, which is formulated as follows: where $Q$ is queries, $K$ is keys, and $V$ is values. In the multi-head attention setting, it first maps the input matrix $X$ into queries, keys and values matrices by using $h$ different learned linear projections. Taking queries $Q$ as an example: where $0 \le i < h$. Keys and values use similar projections. On each of these projections, we perform the scaled dot-product attention in parallel. These parallel output values are concatenated and once again projected into the final values. Equation DISPLAY_FORM14 depicts the above operations. where More details about multi-head attention can be found in BIBREF13. Add & Norm We employ a residual connection to each module, followed by a layer normalization BIBREF36 operation. The output of each module is formulated as where $f(x)$ is implemented by each above module. Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Head & Relation The most intuitive way to represent syntactic information is to use individual dependency relations directly, like dependency head and dependency relation label, denoted as Dep and Rel for short. Except for LISA, where Dep is a one-hot matrix of dependency head word index described in SECREF25, in other cases, we use the corresponding head word. Rel is the dependency relation between the word and its syntactic head. We take both Dep and Rel as common strings and map them into dense vectors in the similar way of word embedding. Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Path & Relation Path In order to preserve the structural information of dependency trees as much as possible, we take the syntactic path between candidate arguments and predicates in dependency trees as linguistic knowledge. Referring to BIBREF9, we use the Tree-based Position Feature (TPF) as Dependency Path (DepPath) and use the Shortest Dependency Path (SDP) as Relation Path (RelPath). To generate DepPath & RelPath between candidate argument and predicate, we firstly find their lowest common ancestor. Then we get two sub-paths, one is from the ancestor to the predicate and the other is from the ancestor to the argument. 
For DepPath, we compute distance from ancestor to predicate and argument respectively and then concatenate two distances with the separator `,'. For RelPath, we concatenate the labels appearing in each sub-path with the separator “_" respectively to get two label paths, and then concatenate the two label paths with the separator `,'. UTF8gbsn As shown in Figure FIGREF21, the lowest common ancestor of the predicate “鼓励 (encourage)" and the candidate argument “农业 (agriculture)" is “鼓励 (encourage)", so their DepPath is “2,0" and its RelPath is “COMP_COMP,". We take both DepPath and RelPath as common strings and map them into dense vectors in the similar way of Dep and Rel. UTF8gbsn Approaches ::: Incorporation Methods ::: Input Embedding Concatenation To incorporate syntactic knowledge, one simple method is to take it as part of the neural network input, denoted as Input. We represent the syntactic information with dense vectors, and concatenate it with other information like word embedding: where $\oplus $ means concatenation; $E_W$ means the original inputs of the neural model and $E_S$ means the embedding of syntax information, such as Dep/Rel or DepPath/RelPath. Approaches ::: Incorporation Methods ::: LISA BIBREF15 propose the linguistically-informed self-attention model (LISA for short) to combine SRL and dependency parsing as multi-task learning in a subtle way. Based on the multi-head self-attention model, LISA uses one attention head to predict the dependency results and it can also directly use pre-trained dependency head results to replace the attention matrix during testing. Being different from their multi-task learning, we make the replacement of one attention head during both training and testing. Instead of the original $softmax$ attention matrix, we use a one-hot matrix, generated by mapping the dependency head index of each word into a 0-1 vector of the sentence length as Figure FIGREF27 shows. We add the dependency relation information with $V$ in the replaced head so that we can make full use of the syntactic knowledge. The replaced attention head is formulated as follows: where $M_D$ is the one-hot dependency head matrix and $E_R$ means the embedding of dependency relation information, such as Rel or RelPath. Approaches ::: Incorporation Methods ::: Relation-Aware Self-Attention Relation-aware self-attention model (RelAwe for brevity) incorporates external information into the attention. By this way, the model considers the pairwise relationships between input elements, which highly agrees with the task of SRL, i.e., aiming to find the semantic relations between the candidate argument and predicate in one sentence. Compared to the standard attention, in this paper, we add the dependency information into $Q$ and $V$ in each attention head, like equation (DISPLAY_FORM15) shows: where $E_D$ and $E_R$ mean the syntactic dependency head and relation information respectively. For our multi-layer multi-head self-attention model, we make this change to each head of the first $N$ self-attention layers. Experiment ::: Settings Datasets & Evaluation Metrics Our experiments are conducted on the CoNLL-2009 shared task dataset BIBREF20. We use the official evaluation script to compare the output of different system configurations, and report the labeled precision (P), labeled recall (R) and labeled f-score (F1) for the semantic dependencies. 
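To make the RelAwe modification described earlier in this section more tangible, here is a minimal single-head sketch in which per-token dependency head and relation embeddings are added to the queries and values. Whether the syntax embeddings are linearly projected before the addition, and how the multi-head case is batched, are assumptions not specified here.

```python
import torch
import torch.nn.functional as F

def relation_aware_head(x, e_dep, e_rel, w_q, w_k, w_v):
    """One attention head with dependency information injected into Q and V.
    x: [T, d_model]; e_dep, e_rel: [T, d_head]; w_q, w_k, w_v: [d_model, d_head]."""
    q = x @ w_q + e_dep + e_rel                       # syntax-aware queries
    k = x @ w_k
    v = x @ w_v + e_dep + e_rel                       # syntax-aware values
    scores = q @ k.transpose(0, 1) / k.size(-1) ** 0.5
    return F.softmax(scores, dim=-1) @ v              # [T, d_head]
```

In the full model this change is applied to every head of the first N self-attention layers, as stated above.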
Word Representations Most of our experiments are conducted in the closed setting without any external word embeddings or data resources than those provided by the CoNLL-2009 datasets. In the closed setting, word embedding is initialized by a Gaussian distribution with mean 0 and variance $\frac{1}{\sqrt{d}}$, where $d$ is the dimension of embedding size of each layer. For the experiments with external resources in the open setting, we utilize 1) word embeddings pre-trained with GloVe BIBREF37 on the Gigaword corpus for Chinese and the published embeddings with 100 dimensions pre-trained on Wikipedia and Gigaword for English; and 2) ELMo BIBREF38 and BERT BIBREF39, two recently proposed effective deep contextualized word representations. Other embeddings, i.e., POS embedding, linguistic knowledge embedding, and so on are initialized in same way as random word embedding no matter in closed or open setting. Syntactic Parsers In Table TABREF30, both Auto and Gold syntactic dependencies are provided by the dataset. Since the performance of the Auto is far behind the state-of-the-art BiaffineParser BIBREF40, we generate more dependency results by training BiaffineParser with different external knowledge, including pre-trained word embedding and BERT. Performance for different parsers is listed in Table TABREF30. Parameters In this work, we set word embedding size $d_w=100$, POS embedding size $d_t=50$. The predicate embedding size is set as $d_p=100$. The syntax-related embedding size varies along with different configurations, so as the feature embedding size $d_f$. To facilitate residual connections, all sub-layers in the model produce outputs of dimension $d_{model}=d_f+d_p$. The hidden dimension $d_{ff}=800$ is applied for all the experiments. We set the number of shared self-attention blocks $N=10$. The number of heads varies with $d_{model}$, but dimension of each head is 25. Besides, LISA incorporates syntax knowledge in the 5-th self-attention layer while RelAwe incorporates in the first 5 layers. We apply the similar dropout strategy as BIBREF13, i.e., the attention and residual dropout values are $0.2$ and $0.3$ respectively. The dropout is also applied in the middle layer of FFN with value $0.2$. We also employ label smoothing BIBREF41 of value $0.1$ during training. We use softmax-cross-entropy as our loss function, and use the Adadelta optimizer BIBREF42 with $\epsilon =10^{-6}$ and $\rho =0.95$. For all experiments, we train the model $200,000$ steps with learning rate $lr=1.0$, and each batch has 4096 words. All the hyper-parameters are tuned on the development set. Configurations We use different abbreviations to represent the parsing results, syntactic dependency representations, and incorporation methods. All the system configurations in our experiments are listed in Table TABREF36. Experiment ::: Quality of the Syntactic Dependencies We use the above-mentioned dependency trees of different quality for comparison, with Dep&Rel representation on our RelAwe model. In addition, we generate one more data AutoDel by deleting all the erroneous dependency heads and relations from the provided Auto data according to the gold heads and relations, and we do not replace them with any alternative heads and relations. We take this setting as another reference (along with GOLD) to indicate that erroneous syntax information may hurt the performance of the SRL model. We take the Gold as the upperbound reference of our task setting. 
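The optimization choices listed in the Parameters paragraph above can be written down as the following sketch; the model here is only a placeholder, and the word-count-based batching and learning-rate handling of the actual system are omitted.

```python
import torch
import torch.nn as nn

model = nn.Linear(150, 32)   # placeholder for the SRL network
optimizer = torch.optim.Adadelta(model.parameters(), lr=1.0, rho=0.95, eps=1e-6)
# Softmax cross-entropy with label smoothing of 0.1, as used during training.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
# Dropout values from the paper: attention 0.2, residual 0.3, FFN middle layer 0.2.
dropout = {"attention": 0.2, "residual": 0.3, "ffn": 0.2}
```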
Experiment results in Table TABREF37 demonstrate that, incorporating syntactic knowledge into the SRL model can achieve better performance and overall, the better the quality is, the better the SRL model performs. This is consistent with the previous study by BIBREF8 on the English dataset. Closer observation reveals two additional interesting phenomena. Firstly, SRL performance improvement is not proportionate to the improvement of dependency quality. When switching syntactic dependency trees from Auto to Biaffine, SRL performance improves 0.5%, although syntactic dependency improves about 8%. In contrast, the difference between Biaffine and BiaffineBert shows more significant improvement of 1.5%. The possible reason is that BiaffineBert provides key dependency information which is missing in other configurations. Secondly, the SRL performance gap between AutoDel and Auto is large though they provide the same correct syntactic information. This may indicate that incorporating erroneous syntactic knowledge hurts the SRL model, and even providing more correct dependencies cannot make up for the harm (cf. BiaffineBert). Experiment ::: Representation of the Syntactic Dependencies Apart from Dep and Rel, we also use DepPath and RelPath to encode the syntactic knowledge. In this subsection, we conduct experiments to compare different syntactic encoding in our SRL model. We base the experiments on our RelAwe model, since it is easier to incorporate different representations for comparison. When generating the RelPath, we filter the paths 1) when the dependency distance between the predicate and the candidate argument is more than 4, and 2) when the RelPath's frequency is less than 10. No matter in which representation, dependency label information is more important than the head and the combination of the two achieves better performance as our experiment results in Table TABREF41 show. Furthermore, using Biaffine dependency trees, DepPath and RelPath perform better than Dep and Rel. This is because of the capability of DepPath and RelPath to capture more structural information of the dependency trees. Comparing Table TABREF37 and TABREF41, when using gold dependencies, DepPath&RelPath can achieve much better result than Dep&Rel. But with the Auto trees, DepPath&RelPath is much worse. Therefore, structural information is much more sensitive to the quality of dependency trees due to error propagation. Experiment ::: Incorporation Methods [9]From the mechanism of LISA, we can find that the replaced attention head can't copy the syntactic dependency heads from DepPath. This subsection discusses the effectiveness of different incorporation methods of the syntactic knowledge. We take Biaffine's output as our dependency information for the comparison. Firstly, results in Table TABREF44 show that with little dependency information (Dep), LISA performs better, while incorporating richer syntactic knowledge (Dep&Rel or Dep&RelPath), three methods achieve similar performance. Overall, RelAwe achieves best results given enough syntactic knowledge. Secondly, Input and LISA achieve much better performance when we combine the dependency head information and the relation, while BIBREF15 have not introduced relation information to the LISA model and BIBREF9 have not combined the head and relation information either. Our proposed RelAwe method with DepPath&RelPath representation performs the best, which encodes the richest syntactic knowledge. 
Lastly, under the same settings, LISA and RelAwe perform better than Input, which indicates the importance of where the model incorporates the syntax: the input layer vs. the encoder layer. Experiment ::: External Resources Apart from the experiments with the syntactic knowledge itself, we also compare different external resources to discover their relationship with the syntax, including pre-trained word embeddings, ELMo, and BERT. We conduct experiments with our best setting, the RelAwe model with DepPath & RelPath, and the results are listed in Table TABREF45. The plain word embeddings bring only a small improvement in such settings with syntactic information, while both of the recently proposed ELMo and BERT boost the models further. Experiment ::: Final Results on the Chinese Test Data Based on the above experiments and analyses, we present the overall results of our model in this subsection. We train the three models (Input, LISA, and RelAwe) with their best settings without any external knowledge as Closed, and we take the same models with BERT as Open. The DepPath&RelPath from Gold without external knowledge serves as the Gold reference. Since we have been focusing on the task of argument identification and labeling, for both Closed and Open, we follow BIBREF22 and use existing systems' predicate senses BIBREF43, excluding them from the comparison. Table TABREF46 shows that our Open model achieves an f1-score more than 3 points higher than the state-of-the-art result, and that RelAwe with DepPath&RelPath achieves the best performance in both the Closed and Open settings. Notice that our best Closed model performs almost as well as the state-of-the-art model, although the latter utilizes pre-trained word embeddings. Besides, the performance gap between the three models under the Open setting is very small, which indicates that the representation ability of BERT is very powerful and that it may already contain rich syntactic information. Finally, the Gold result is much higher than that of the other models, indicating that there is still considerable room for improvement on this task. Experiment ::: Results on the English Data We also conduct several experiments on the English dataset to validate the effectiveness of our approaches on languages other than Chinese, and the results are in Table TABREF49. Although neither configuration is exactly the same as in the original papers, we tried our best to reproduce their methods on the CoNLL-2009 dataset for our comparison. Overall, the results are consistent with the Chinese experiments, while the improvements are not as large as for their Chinese counterparts. The RelAwe model with DepPath&RelPath still achieves the best performance. Applying our syntax-enhanced model to more languages will be an interesting research direction to work on in the future. [10]We reimplement LISA from BIBREF15 as LISA(Dep), and BIBREF9's best DepPath approach as Input(DepPath), so that we can compare with their work as fairly as possible. Other settings are the best configurations for the corresponding methods. Conclusion and Future Work This paper investigates in depth how to incorporate syntactic dependency information into semantic role labeling. Firstly, we confirm that dependency trees of better quality are more helpful for the SRL task. Secondly, we present different ways to encode the trees, and the experiments show that keeping more (correct) structural information during encoding improves the SRL performance.
Thirdly, we compare three incorporation methods and discover that our proposed relation-aware self-attention-based model is the most effective one. Although our experiments are primarily on the Chinese dataset, the approach is largely language independent. Apart from our tentative experiments on the English dataset, applying the approach to other languages will be an interesting research direction to work on in the future.
Unanswerable
86f24ecc89e743bb1534ac160d08859493afafe9
86f24ecc89e743bb1534ac160d08859493afafe9_0
Q: What different approaches of encoding syntactic information authors present? Text: Introduction The task of semantic role labeling (SRL) is to recognize arguments for a given predicate in one sentence and assign labels to them, including “who” did “what” to “whom”, “when”, “where”, etc. Figure FIGREF1 is an example sentence with both semantic roles and syntactic dependencies. Since the nature of semantic roles is more abstract than the syntactic dependencies, SRL has a wide range of applications in different areas, e.g., text classification BIBREF0, text summarization BIBREF1, BIBREF2, recognizing textual entailment BIBREF3, BIBREF4, information extraction BIBREF5, question answering BIBREF6, BIBREF7, and so on. UTF8gbsn Traditionally, syntax is the bridge to reach semantics. However, along with the popularity of the end-to-end models in the NLP community, various recent studies have been discussing the necessity of syntax in the context of SRL. For instance, BIBREF8 have observed that only good syntax helps with the SRL performance. BIBREF9 have explored what kind of syntactic information or structure is better suited for the SRL model. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and claim that the syntax-agnostic model surpasses the syntax-aware ones. In this paper, we focus on analyzing the relationship between the syntactic dependency information and the SRL performance. In particular, we investigate the following four aspects: 1) Quality of the syntactic information: whether the performance of the syntactic parser output affects the SRL performance; 2) Representation of the syntactic information: how to represent the syntactic dependencies to better preserve the original structural information; 3) Incorporation of the syntactic information: at which layer of the SRL model and how to incorporate the syntactic information; and 4) the Relationship with other external resources: when we append other external resources into the SRL model, whether their contributions are orthogonal to the syntactic dependencies. For the main architecture of the SRL model, many neural-network-based models use BiLSTM as the encoder (e.g., BIBREF10, BIBREF11, BIBREF12), while recently self-attention-based encoder becomes popular due to both the effectiveness and the efficiency BIBREF13, BIBREF14, BIBREF15. By its nature, the self-attention-based model directly captures the relation between words in the sentence, which is convenient to incorporate syntactic dependency information. BIBREF15 replace one attention head with pre-trained syntactic dependency information, which can be viewed as a hard way to inject syntax into the neural model. Enlightened by the machine translation model proposed by BIBREF16, we introduce the Relation-Aware method to incorporate syntactic dependencies, which is a softer way to encode richer structural information. Various experiments for the Chinese SRL on the CoNLL-2009 dataset are conducted to evaluate our hypotheses. From the empirical results, we observe that: 1) The quality of the syntactic information is essential when we incorporate structural information into the SRL model; 2) Deeper integration of the syntactic information achieves better results than the simple concatenation to the inputs; 3) External pre-trained contextualized word representations help to boost the SRL performance further, which is not entirely overlapping with the syntactic information. 
In summary, the contributions of our work are: We present detailed experiments on different aspects of incorporating syntactic information into the SRL model, in what quality, in which representation and how to integrate. We introduce the relation-aware approach to employ syntactic dependencies into the self-attention-based SRL model. We compare our approach with previous studies, and achieve state-of-the-art results with and without external resources, i.e., in the so-called closed and open settings. Related work Traditional semantic role labeling task BIBREF17 presumes that the syntactic structure of the sentence is given, either being a constituent tree or a dependency tree, like in the CoNLL shared tasks BIBREF18, BIBREF19, BIBREF20. Recent neural-network-based approaches can be roughly categorized into two classes: 1) making use of the syntactic information BIBREF21, BIBREF22, BIBREF23, BIBREF24, and 2) pure end-to-end learning from tokens to semantic labels, e.g., BIBREF25, BIBREF26. BIBREF22 utilize an LSTM model to obtain embeddings from the syntactic dependency paths; while BIBREF24 construct Graph Convolutional Networks to encode the dependency structure. Although BIBREF8's approach is a pure end-to-end learning, they have included an analysis of adding syntactic dependency information into English SRL in the discussion section. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and BIBREF9 have compared different ways to represent and encode the syntactic knowledge. In another line of research, BIBREF14 utilize the Transformer network for the encoder instead of the BiLSTM. BIBREF15 present a novel and effective multi-head self-attention model to incorporate syntax, which is called LISA (Linguistically-Informed Self-Attention). We follow their approach of replacing one attention head with the dependency head information, but use a softer way to capture the pairwise relationship between input elements BIBREF16. For the datasets and annotations of the SRL task, most of the previous research focuses on 1) PropBank BIBREF27 and NomBank BIBREF28 annotations, i.e., the CoNLL 2005 BIBREF18 and CoNLL 2009 BIBREF20 shared tasks; 2) OntoNotes annotations BIBREF29, i.e., the CoNLL 2005 and CoNLL 2012 datasets and more; 3) and FrameNet BIBREF30 annotations. For the non-English languages, not all of them are widely available. Apart from these, in the broad range of semantic processing, other formalisms non-exhaustively include abstract meaning representation BIBREF31, universal decompositional semantics BIBREF32, and semantic dependency parsing BIBREF33. BIBREF34 give a better overview of various semantic representations. In this paper, we primarily work on the Chinese and English datasets from the CoNLL-2009 shared task and focus on the effectiveness of incorporating syntax into the Chinese SRL task. Approaches In this section, we first introduce the basic architecture of our self-attention-based SRL model, and then present two different ways to encode the syntactic dependency information. Afterwards, we compare three approaches to incorporate the syntax into the base model, concatenation to the input embedding, LISA, and our proposed relation-aware method. Approaches ::: The Basic Architecture Our basic model is a multi-head self-attention-based model, which is effective in SRL task as previous work proves BIBREF35. The model consists of three layers: the input layer, the encoder layer and the prediction layer as shown in Figure FIGREF5. 
Approaches ::: The Basic Architecture ::: Input Layer The input layer contains three types of embeddings: token embedding, predicate embedding, and positional embedding. Token Embedding includes the word embedding and the part-of-speech (POS) tag embedding. Predicate Embedding was proposed by BIBREF8, and its binary embedding is used to indicate the predicate indices in each sentence. Positional Embedding encodes the order of the input word sequence. We follow BIBREF13 and use the time positional embedding, which is formulated as $PE_{(t, 2i)} = \sin (t / 10000^{2i/d})$ and $PE_{(t, 2i+1)} = \cos (t / 10000^{2i/d})$, where $t$ is the position, $i$ is the dimension index, and $d$ is the dimension of the model input embedding. Approaches ::: The Basic Architecture ::: Encoder Layer The self-attention block is almost the same as the Transformer encoder proposed by BIBREF13. Specifically, the Transformer encoder contains a feed-forward network (FFN) and a multi-head attention network, where the multi-head attention module is followed by the FFN module. In this work, we exchange their order, so that the multi-head attention module is moved behind the FFN module, as Figure FIGREF5 shows. FFN The FFN module consists of two affine layers with a ReLU activation in the middle. Formally, we have the following equation: $\mathrm{FFN}(x) = \max (0, xW_1 + b_1)W_2 + b_2$. Multi-Head Attention The basic attention mechanism used in the multi-head attention function is called “Scaled Dot-Product Attention”, which is formulated as $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V$, where $Q$ denotes the queries, $K$ the keys, $V$ the values, and $d_k$ the dimension of the keys. In the multi-head attention setting, the model first maps the input matrix $X$ into queries, keys and values matrices by using $h$ different learned linear projections. Taking the queries $Q$ as an example: $Q_i = XW_i^Q$, where $0 \le i < h$. Keys and values use similar projections. On each of these projections, we perform the scaled dot-product attention in parallel. These parallel output values are concatenated and once again projected into the final values. Equation DISPLAY_FORM14 depicts the above operations: $\mathrm{MultiHead}(X) = \mathrm{Concat}(\mathrm{head}_1, \dots , \mathrm{head}_h)W^O$, where $\mathrm{head}_i = \mathrm{Attention}(Q_i, K_i, V_i)$. More details about multi-head attention can be found in BIBREF13. Add & Norm We employ a residual connection to each module, followed by a layer normalization BIBREF36 operation. The output of each module is formulated as $\mathrm{LayerNorm}(x + f(x))$, where $f(x)$ is implemented by each of the above modules. Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Head & Relation The most intuitive way to represent syntactic information is to use the individual dependency relations directly, i.e., the dependency head and the dependency relation label, denoted as Dep and Rel for short. Except for LISA, where Dep is a one-hot matrix of dependency head word indices as described in SECREF25, in all other cases we use the corresponding head word. Rel is the dependency relation between the word and its syntactic head. We treat both Dep and Rel as common strings and map them into dense vectors in a similar way to the word embeddings. Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Path & Relation Path In order to preserve the structural information of the dependency trees as much as possible, we take the syntactic path between the candidate argument and the predicate in the dependency tree as linguistic knowledge. Following BIBREF9, we use the Tree-based Position Feature (TPF) as the Dependency Path (DepPath) and the Shortest Dependency Path (SDP) as the Relation Path (RelPath). To generate DepPath & RelPath between a candidate argument and a predicate, we first find their lowest common ancestor. Then we get two sub-paths, one from the ancestor to the predicate and the other from the ancestor to the argument.
For DepPath, we compute the distances from the ancestor to the predicate and to the argument respectively, and then concatenate the two distances with the separator `,'. For RelPath, we concatenate the labels appearing in each sub-path with the separator “_" to get two label paths, and then concatenate the two label paths with the separator `,'. As shown in Figure FIGREF21, the lowest common ancestor of the predicate “鼓励 (encourage)" and the candidate argument “农业 (agriculture)" is “鼓励 (encourage)" itself, so their DepPath is “2,0" and their RelPath is “COMP_COMP,". We treat both DepPath and RelPath as common strings and map them into dense vectors in a similar way to Dep and Rel. Approaches ::: Incorporation Methods ::: Input Embedding Concatenation To incorporate the syntactic knowledge, one simple method is to take it as part of the neural network input, denoted as Input. We represent the syntactic information with dense vectors and concatenate it with other information like the word embedding, i.e., the model input becomes $E_W \oplus E_S$, where $\oplus $ denotes concatenation, $E_W$ denotes the original inputs of the neural model, and $E_S$ denotes the embedding of the syntactic information, such as Dep/Rel or DepPath/RelPath. Approaches ::: Incorporation Methods ::: LISA BIBREF15 propose the linguistically-informed self-attention model (LISA for short) to combine SRL and dependency parsing as multi-task learning in a subtle way. Based on the multi-head self-attention model, LISA uses one attention head to predict the dependency results, and it can also directly use pre-trained dependency head results to replace the attention matrix during testing. Differently from their multi-task learning, we make the replacement of one attention head during both training and testing. Instead of the original $softmax$ attention matrix, we use a one-hot matrix, generated by mapping the dependency head index of each word into a 0-1 vector of the sentence length, as Figure FIGREF27 shows. We add the dependency relation information to $V$ in the replaced head so that we can make full use of the syntactic knowledge. In the formulation of the replaced attention head, $M_D$ is the one-hot dependency head matrix and $E_R$ denotes the embedding of the dependency relation information, such as Rel or RelPath. Approaches ::: Incorporation Methods ::: Relation-Aware Self-Attention The relation-aware self-attention model (RelAwe for brevity) incorporates external information into the attention. In this way, the model considers the pairwise relationships between input elements, which agrees well with the task of SRL, i.e., finding the semantic relations between the candidate argument and the predicate in one sentence. Compared to the standard attention, in this paper we add the dependency information to $Q$ and $V$ in each attention head, following equation (DISPLAY_FORM15), where $E_D$ and $E_R$ denote the syntactic dependency head and relation information respectively. For our multi-layer multi-head self-attention model, we make this change to each head of the first $N$ self-attention layers. Experiment ::: Settings Datasets & Evaluation Metrics Our experiments are conducted on the CoNLL-2009 shared task dataset BIBREF20. We use the official evaluation script to compare the output of different system configurations, and report the labeled precision (P), labeled recall (R) and labeled f-score (F1) for the semantic dependencies.
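To make the DepPath/RelPath construction described above concrete, the following Python sketch reproduces the worked example (DepPath “2,0" and RelPath “COMP_COMP,") on a three-token toy tree. It is only an illustration of one reading of the procedure: the function names, the 1-based head convention, and the ordering of the two sub-paths (argument sub-path first, matching the example) are illustrative assumptions rather than details given in the text.

def path_to_root(i, heads):
    # Return the list of token indices from token i up to the root (inclusive of i).
    path = [i]
    while heads[i] != 0:          # heads are 1-based; 0 marks the root
        i = heads[i] - 1
        path.append(i)
    return path

def dep_rel_path(pred, arg, heads, rels):
    # DepPath/RelPath between predicate and argument via their lowest common ancestor.
    p_path, a_path = path_to_root(pred, heads), path_to_root(arg, heads)
    ancestors = set(p_path)
    lca = next(n for n in a_path if n in ancestors)   # lowest common ancestor
    p_sub = p_path[:p_path.index(lca)]                # edges between lca and predicate
    a_sub = a_path[:a_path.index(lca)]                # edges between lca and argument
    dep_path = f"{len(a_sub)},{len(p_sub)}"           # argument sub-path first, as in the example
    rel_path = "_".join(rels[n] for n in a_sub) + "," + "_".join(rels[n] for n in p_sub)
    return dep_path, rel_path

# Toy tree: token 0 is the predicate and the root; token 2 is the argument,
# two COMP edges below the predicate (0 <- 1 <- 2).
heads = [0, 1, 2]                 # 1-based head indices, 0 = root
rels = ["ROOT", "COMP", "COMP"]   # label of the edge from each token to its head
print(dep_rel_path(0, 2, heads, rels))   # ('2,0', 'COMP_COMP,')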
Word Representations Most of our experiments are conducted in the closed setting without any external word embeddings or data resources than those provided by the CoNLL-2009 datasets. In the closed setting, word embedding is initialized by a Gaussian distribution with mean 0 and variance $\frac{1}{\sqrt{d}}$, where $d$ is the dimension of embedding size of each layer. For the experiments with external resources in the open setting, we utilize 1) word embeddings pre-trained with GloVe BIBREF37 on the Gigaword corpus for Chinese and the published embeddings with 100 dimensions pre-trained on Wikipedia and Gigaword for English; and 2) ELMo BIBREF38 and BERT BIBREF39, two recently proposed effective deep contextualized word representations. Other embeddings, i.e., POS embedding, linguistic knowledge embedding, and so on are initialized in same way as random word embedding no matter in closed or open setting. Syntactic Parsers In Table TABREF30, both Auto and Gold syntactic dependencies are provided by the dataset. Since the performance of the Auto is far behind the state-of-the-art BiaffineParser BIBREF40, we generate more dependency results by training BiaffineParser with different external knowledge, including pre-trained word embedding and BERT. Performance for different parsers is listed in Table TABREF30. Parameters In this work, we set word embedding size $d_w=100$, POS embedding size $d_t=50$. The predicate embedding size is set as $d_p=100$. The syntax-related embedding size varies along with different configurations, so as the feature embedding size $d_f$. To facilitate residual connections, all sub-layers in the model produce outputs of dimension $d_{model}=d_f+d_p$. The hidden dimension $d_{ff}=800$ is applied for all the experiments. We set the number of shared self-attention blocks $N=10$. The number of heads varies with $d_{model}$, but dimension of each head is 25. Besides, LISA incorporates syntax knowledge in the 5-th self-attention layer while RelAwe incorporates in the first 5 layers. We apply the similar dropout strategy as BIBREF13, i.e., the attention and residual dropout values are $0.2$ and $0.3$ respectively. The dropout is also applied in the middle layer of FFN with value $0.2$. We also employ label smoothing BIBREF41 of value $0.1$ during training. We use softmax-cross-entropy as our loss function, and use the Adadelta optimizer BIBREF42 with $\epsilon =10^{-6}$ and $\rho =0.95$. For all experiments, we train the model $200,000$ steps with learning rate $lr=1.0$, and each batch has 4096 words. All the hyper-parameters are tuned on the development set. Configurations We use different abbreviations to represent the parsing results, syntactic dependency representations, and incorporation methods. All the system configurations in our experiments are listed in Table TABREF36. Experiment ::: Quality of the Syntactic Dependencies We use the above-mentioned dependency trees of different quality for comparison, with Dep&Rel representation on our RelAwe model. In addition, we generate one more data AutoDel by deleting all the erroneous dependency heads and relations from the provided Auto data according to the gold heads and relations, and we do not replace them with any alternative heads and relations. We take this setting as another reference (along with GOLD) to indicate that erroneous syntax information may hurt the performance of the SRL model. We take the Gold as the upperbound reference of our task setting. 
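As a small illustration of the closed-setting initialization described above, the sketch below draws embeddings from a Gaussian with mean 0 and variance $\frac{1}{\sqrt{d}}$ (i.e., standard deviation $d^{-1/4}$). The function name and the vocabulary sizes are illustrative choices, not values taken from the text.

import numpy as np

def init_embedding(vocab_size, d, seed=0):
    # Variance 1/sqrt(d) corresponds to a standard deviation of d ** -0.25.
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=d ** -0.25, size=(vocab_size, d))

word_emb = init_embedding(vocab_size=30000, d=100)  # d_w = 100
pos_emb = init_embedding(vocab_size=40, d=50)       # d_t = 50
print(word_emb.shape, round(float(word_emb.std()), 3))  # (30000, 100), roughly 100 ** -0.25 ≈ 0.316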
Experiment results in Table TABREF37 demonstrate that, incorporating syntactic knowledge into the SRL model can achieve better performance and overall, the better the quality is, the better the SRL model performs. This is consistent with the previous study by BIBREF8 on the English dataset. Closer observation reveals two additional interesting phenomena. Firstly, SRL performance improvement is not proportionate to the improvement of dependency quality. When switching syntactic dependency trees from Auto to Biaffine, SRL performance improves 0.5%, although syntactic dependency improves about 8%. In contrast, the difference between Biaffine and BiaffineBert shows more significant improvement of 1.5%. The possible reason is that BiaffineBert provides key dependency information which is missing in other configurations. Secondly, the SRL performance gap between AutoDel and Auto is large though they provide the same correct syntactic information. This may indicate that incorporating erroneous syntactic knowledge hurts the SRL model, and even providing more correct dependencies cannot make up for the harm (cf. BiaffineBert). Experiment ::: Representation of the Syntactic Dependencies Apart from Dep and Rel, we also use DepPath and RelPath to encode the syntactic knowledge. In this subsection, we conduct experiments to compare different syntactic encoding in our SRL model. We base the experiments on our RelAwe model, since it is easier to incorporate different representations for comparison. When generating the RelPath, we filter the paths 1) when the dependency distance between the predicate and the candidate argument is more than 4, and 2) when the RelPath's frequency is less than 10. No matter in which representation, dependency label information is more important than the head and the combination of the two achieves better performance as our experiment results in Table TABREF41 show. Furthermore, using Biaffine dependency trees, DepPath and RelPath perform better than Dep and Rel. This is because of the capability of DepPath and RelPath to capture more structural information of the dependency trees. Comparing Table TABREF37 and TABREF41, when using gold dependencies, DepPath&RelPath can achieve much better result than Dep&Rel. But with the Auto trees, DepPath&RelPath is much worse. Therefore, structural information is much more sensitive to the quality of dependency trees due to error propagation. Experiment ::: Incorporation Methods [9]From the mechanism of LISA, we can find that the replaced attention head can't copy the syntactic dependency heads from DepPath. This subsection discusses the effectiveness of different incorporation methods of the syntactic knowledge. We take Biaffine's output as our dependency information for the comparison. Firstly, results in Table TABREF44 show that with little dependency information (Dep), LISA performs better, while incorporating richer syntactic knowledge (Dep&Rel or Dep&RelPath), three methods achieve similar performance. Overall, RelAwe achieves best results given enough syntactic knowledge. Secondly, Input and LISA achieve much better performance when we combine the dependency head information and the relation, while BIBREF15 have not introduced relation information to the LISA model and BIBREF9 have not combined the head and relation information either. Our proposed RelAwe method with DepPath&RelPath representation performs the best, which encodes the richest syntactic knowledge. 
Lastly, under the same settings, LISA and RelAwe perform better than Input, which indicates the importance of the location where the model incorporates the syntax, the input layer vs. the encoder layer. Experiment ::: External Resources Apart from the experiments with syntactic knowledge itself, we also compare different external resources to discover their relationship with the syntax, including pre-trained word embeddings, ELMo, and BERT. We conduct experiments with our best setting, the RelAwe model with DepPath & RelPath and the results are listed in Table TABREF45. The plain word embedding improves a little in such settings with syntactic information, while for the newly proposed Elmo and Bert, both of them can boost the models further. Experiment ::: Final Results on the Chinese Test Data Based on the above experiments and analyses, we present the overall results of our model in this subsection. We train the three models (Input, LISA, and RelAwe) with their best settings without any external knowledge as Closed, and we take the same models with Bert as Open. The DepPath&RelPath from Gold without external knowledge serves as the Gold for reference. Since we have been focusing on the task of argument identification and labeling, for both Closed and Open, we follow BIBREF22 to use existing systems' predicate senses BIBREF43 to exclude them from comparison. Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings. Notice that our best Closed model can almost perform as well as the state-of-the-art model while the latter utilizes pre-trained word embeddings. Besides, performance gap between three models under Open setting is very small. It indicates that the representation ability of BERT is so powerful and may contains rich syntactic information. At last, the Gold result is much higher than the other models, indicating that there is still large space for improvement for this task. Experiment ::: Results on the English Data We also conduct several experiments on the English dataset to validate the effectiveness of our approaches on other languages than Chinese and the results are in Table TABREF49. Although both configurations are not exactly the same as their original papers, we tried our best to reproduce their methods on the CoNLL2009 dataset for our comparison. Overall, the results are consistent with the Chinese experiments, while the improvement is not as large as the Chinese counterparts. The RelAwe model with DepPath&RelPath still achieves the best performance. Applying our syntax-enhanced model to more languages will be an interesting research direction to work on in the future. [10]We reimplement LISA in BIBREF15 as LISA(Dep), and BIBREF9's best DepPath approach as Input(DepPath). Therefore, we can compare with their work as fairly as possible. Other settings are the best configurations for their corresponding methods. Conclusion and Future Work This paper investigates how to incorporate syntactic dependency information into semantic role labeling in depth. Firstly, we confirm that dependency trees of better quality are more helpful for the SRL task. Secondly, we present different ways to encode the trees and the experiments show that keeping more (correct) structural information during encoding improves the SRL performance. 
Thirdly, we compare three incorporation methods and discover that our proposed relation-aware self-attention-based model is the most effective one. Although our experiments are primarily on the Chinese dataset, the approach is largely language independent. Apart from our tentative experiments on the English dataset, applying the approach to other languages will be an interesting research direction to work on in the future.
dependency head and dependency relation label, denoted as Dep and Rel for short, Tree-based Position Feature (TPF) as Dependency Path (DepPath), Shortest Dependency Path (SDP) as Relation Path (RelPath)
bab8c69e183bae6e30fc362009db9b46e720225e
bab8c69e183bae6e30fc362009db9b46e720225e_0
Q: What are two strong baseline methods authors refer to? Text: Introduction The task of semantic role labeling (SRL) is to recognize arguments for a given predicate in one sentence and assign labels to them, including “who” did “what” to “whom”, “when”, “where”, etc. Figure FIGREF1 is an example sentence with both semantic roles and syntactic dependencies. Since the nature of semantic roles is more abstract than the syntactic dependencies, SRL has a wide range of applications in different areas, e.g., text classification BIBREF0, text summarization BIBREF1, BIBREF2, recognizing textual entailment BIBREF3, BIBREF4, information extraction BIBREF5, question answering BIBREF6, BIBREF7, and so on. UTF8gbsn Traditionally, syntax is the bridge to reach semantics. However, along with the popularity of the end-to-end models in the NLP community, various recent studies have been discussing the necessity of syntax in the context of SRL. For instance, BIBREF8 have observed that only good syntax helps with the SRL performance. BIBREF9 have explored what kind of syntactic information or structure is better suited for the SRL model. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and claim that the syntax-agnostic model surpasses the syntax-aware ones. In this paper, we focus on analyzing the relationship between the syntactic dependency information and the SRL performance. In particular, we investigate the following four aspects: 1) Quality of the syntactic information: whether the performance of the syntactic parser output affects the SRL performance; 2) Representation of the syntactic information: how to represent the syntactic dependencies to better preserve the original structural information; 3) Incorporation of the syntactic information: at which layer of the SRL model and how to incorporate the syntactic information; and 4) the Relationship with other external resources: when we append other external resources into the SRL model, whether their contributions are orthogonal to the syntactic dependencies. For the main architecture of the SRL model, many neural-network-based models use BiLSTM as the encoder (e.g., BIBREF10, BIBREF11, BIBREF12), while recently self-attention-based encoder becomes popular due to both the effectiveness and the efficiency BIBREF13, BIBREF14, BIBREF15. By its nature, the self-attention-based model directly captures the relation between words in the sentence, which is convenient to incorporate syntactic dependency information. BIBREF15 replace one attention head with pre-trained syntactic dependency information, which can be viewed as a hard way to inject syntax into the neural model. Enlightened by the machine translation model proposed by BIBREF16, we introduce the Relation-Aware method to incorporate syntactic dependencies, which is a softer way to encode richer structural information. Various experiments for the Chinese SRL on the CoNLL-2009 dataset are conducted to evaluate our hypotheses. From the empirical results, we observe that: 1) The quality of the syntactic information is essential when we incorporate structural information into the SRL model; 2) Deeper integration of the syntactic information achieves better results than the simple concatenation to the inputs; 3) External pre-trained contextualized word representations help to boost the SRL performance further, which is not entirely overlapping with the syntactic information. 
In summary, the contributions of our work are: We present detailed experiments on different aspects of incorporating syntactic information into the SRL model, in what quality, in which representation and how to integrate. We introduce the relation-aware approach to employ syntactic dependencies into the self-attention-based SRL model. We compare our approach with previous studies, and achieve state-of-the-art results with and without external resources, i.e., in the so-called closed and open settings. Related work Traditional semantic role labeling task BIBREF17 presumes that the syntactic structure of the sentence is given, either being a constituent tree or a dependency tree, like in the CoNLL shared tasks BIBREF18, BIBREF19, BIBREF20. Recent neural-network-based approaches can be roughly categorized into two classes: 1) making use of the syntactic information BIBREF21, BIBREF22, BIBREF23, BIBREF24, and 2) pure end-to-end learning from tokens to semantic labels, e.g., BIBREF25, BIBREF26. BIBREF22 utilize an LSTM model to obtain embeddings from the syntactic dependency paths; while BIBREF24 construct Graph Convolutional Networks to encode the dependency structure. Although BIBREF8's approach is a pure end-to-end learning, they have included an analysis of adding syntactic dependency information into English SRL in the discussion section. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and BIBREF9 have compared different ways to represent and encode the syntactic knowledge. In another line of research, BIBREF14 utilize the Transformer network for the encoder instead of the BiLSTM. BIBREF15 present a novel and effective multi-head self-attention model to incorporate syntax, which is called LISA (Linguistically-Informed Self-Attention). We follow their approach of replacing one attention head with the dependency head information, but use a softer way to capture the pairwise relationship between input elements BIBREF16. For the datasets and annotations of the SRL task, most of the previous research focuses on 1) PropBank BIBREF27 and NomBank BIBREF28 annotations, i.e., the CoNLL 2005 BIBREF18 and CoNLL 2009 BIBREF20 shared tasks; 2) OntoNotes annotations BIBREF29, i.e., the CoNLL 2005 and CoNLL 2012 datasets and more; 3) and FrameNet BIBREF30 annotations. For the non-English languages, not all of them are widely available. Apart from these, in the broad range of semantic processing, other formalisms non-exhaustively include abstract meaning representation BIBREF31, universal decompositional semantics BIBREF32, and semantic dependency parsing BIBREF33. BIBREF34 give a better overview of various semantic representations. In this paper, we primarily work on the Chinese and English datasets from the CoNLL-2009 shared task and focus on the effectiveness of incorporating syntax into the Chinese SRL task. Approaches In this section, we first introduce the basic architecture of our self-attention-based SRL model, and then present two different ways to encode the syntactic dependency information. Afterwards, we compare three approaches to incorporate the syntax into the base model, concatenation to the input embedding, LISA, and our proposed relation-aware method. Approaches ::: The Basic Architecture Our basic model is a multi-head self-attention-based model, which is effective in SRL task as previous work proves BIBREF35. The model consists of three layers: the input layer, the encoder layer and the prediction layer as shown in Figure FIGREF5. 
Approaches ::: The Basic Architecture ::: Input Layer The input layer contains three types of embeddings: token embedding, predicate embedding, and positional embedding. Token Embedding includes word embedding, part-of-speech (POS) tag embedding. Predicate Embedding has been proposed by BIBREF8, and its binary embedding is used to indicate the predicates indices in each sentence. Positional Embedding encodes the order of the input word sequence. We follow BIBREF13 to use time positional embedding, which is formulated as follows: where $t$ is the position, $i$ means the dimension, and $d$ is the dimension of the model input embedding. Approaches ::: The Basic Architecture ::: Encoder Layer The self-attention block is almost the same as Transformer encoder proposed by BIBREF13. Specifically the Transformer encoder contains a feed-forward network (FFN) and a multi-head attention network. The former is followed by the latter. In this work, we exchange their order, so that the multi-head attention module is moved behind the FFN module as Figure FIGREF5 shows. FFN The FFN module consists of two affine layers with a ReLU activation in the middle. Formally, we have the following equation: Multi-Head Attention The basic attention mechanism used in the multi-head attention function is called “Scaled Dot-Product Attention”, which is formulated as follows: where $Q$ is queries, $K$ is keys, and $V$ is values. In the multi-head attention setting, it first maps the input matrix $X$ into queries, keys and values matrices by using $h$ different learned linear projections. Taking queries $Q$ as an example: where $0 \le i < h$. Keys and values use similar projections. On each of these projections, we perform the scaled dot-product attention in parallel. These parallel output values are concatenated and once again projected into the final values. Equation DISPLAY_FORM14 depicts the above operations. where More details about multi-head attention can be found in BIBREF13. Add & Norm We employ a residual connection to each module, followed by a layer normalization BIBREF36 operation. The output of each module is formulated as where $f(x)$ is implemented by each above module. Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Head & Relation The most intuitive way to represent syntactic information is to use individual dependency relations directly, like dependency head and dependency relation label, denoted as Dep and Rel for short. Except for LISA, where Dep is a one-hot matrix of dependency head word index described in SECREF25, in other cases, we use the corresponding head word. Rel is the dependency relation between the word and its syntactic head. We take both Dep and Rel as common strings and map them into dense vectors in the similar way of word embedding. Approaches ::: Representation of the Syntactic Dependencies ::: Dependency Path & Relation Path In order to preserve the structural information of dependency trees as much as possible, we take the syntactic path between candidate arguments and predicates in dependency trees as linguistic knowledge. Referring to BIBREF9, we use the Tree-based Position Feature (TPF) as Dependency Path (DepPath) and use the Shortest Dependency Path (SDP) as Relation Path (RelPath). To generate DepPath & RelPath between candidate argument and predicate, we firstly find their lowest common ancestor. Then we get two sub-paths, one is from the ancestor to the predicate and the other is from the ancestor to the argument. 
For DepPath, we compute distance from ancestor to predicate and argument respectively and then concatenate two distances with the separator `,'. For RelPath, we concatenate the labels appearing in each sub-path with the separator “_" respectively to get two label paths, and then concatenate the two label paths with the separator `,'. UTF8gbsn As shown in Figure FIGREF21, the lowest common ancestor of the predicate “鼓励 (encourage)" and the candidate argument “农业 (agriculture)" is “鼓励 (encourage)", so their DepPath is “2,0" and its RelPath is “COMP_COMP,". We take both DepPath and RelPath as common strings and map them into dense vectors in the similar way of Dep and Rel. UTF8gbsn Approaches ::: Incorporation Methods ::: Input Embedding Concatenation To incorporate syntactic knowledge, one simple method is to take it as part of the neural network input, denoted as Input. We represent the syntactic information with dense vectors, and concatenate it with other information like word embedding: where $\oplus $ means concatenation; $E_W$ means the original inputs of the neural model and $E_S$ means the embedding of syntax information, such as Dep/Rel or DepPath/RelPath. Approaches ::: Incorporation Methods ::: LISA BIBREF15 propose the linguistically-informed self-attention model (LISA for short) to combine SRL and dependency parsing as multi-task learning in a subtle way. Based on the multi-head self-attention model, LISA uses one attention head to predict the dependency results and it can also directly use pre-trained dependency head results to replace the attention matrix during testing. Being different from their multi-task learning, we make the replacement of one attention head during both training and testing. Instead of the original $softmax$ attention matrix, we use a one-hot matrix, generated by mapping the dependency head index of each word into a 0-1 vector of the sentence length as Figure FIGREF27 shows. We add the dependency relation information with $V$ in the replaced head so that we can make full use of the syntactic knowledge. The replaced attention head is formulated as follows: where $M_D$ is the one-hot dependency head matrix and $E_R$ means the embedding of dependency relation information, such as Rel or RelPath. Approaches ::: Incorporation Methods ::: Relation-Aware Self-Attention Relation-aware self-attention model (RelAwe for brevity) incorporates external information into the attention. By this way, the model considers the pairwise relationships between input elements, which highly agrees with the task of SRL, i.e., aiming to find the semantic relations between the candidate argument and predicate in one sentence. Compared to the standard attention, in this paper, we add the dependency information into $Q$ and $V$ in each attention head, like equation (DISPLAY_FORM15) shows: where $E_D$ and $E_R$ mean the syntactic dependency head and relation information respectively. For our multi-layer multi-head self-attention model, we make this change to each head of the first $N$ self-attention layers. Experiment ::: Settings Datasets & Evaluation Metrics Our experiments are conducted on the CoNLL-2009 shared task dataset BIBREF20. We use the official evaluation script to compare the output of different system configurations, and report the labeled precision (P), labeled recall (R) and labeled f-score (F1) for the semantic dependencies. 
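The one-hot replacement used in the LISA variant described above can be sketched roughly as follows. This is only one possible reading of the description (attend exactly to the syntactic head and add the relation embedding to the values); the exact formulation of the replaced head may differ, and the names, the root convention, and the toy shapes below are illustrative assumptions.

import numpy as np

def one_hot_heads(heads, sent_len):
    # Map 1-based dependency head indices to a 0-1 matrix M_D of shape
    # (sent_len, sent_len); here the root is assumed to attend to itself.
    M = np.zeros((sent_len, sent_len))
    for i, h in enumerate(heads):
        M[i, i if h == 0 else h - 1] = 1.0
    return M

def replaced_attention_head(heads, V, E_R):
    # One possible reading: the softmax attention matrix is replaced by M_D,
    # and the relation embeddings E_R (Rel or RelPath) are added to the values V.
    M_D = one_hot_heads(heads, len(heads))
    return M_D @ (V + E_R)

heads = [2, 0, 2]                                    # 1-based heads; the second token is the root
V = np.random.default_rng(0).normal(size=(3, 25))    # toy value vectors, head dimension 25
E_R = np.zeros((3, 25))                              # stand-in relation embeddings
print(replaced_attention_head(heads, V, E_R).shape)  # (3, 25)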
Word Representations Most of our experiments are conducted in the closed setting without any external word embeddings or data resources than those provided by the CoNLL-2009 datasets. In the closed setting, word embedding is initialized by a Gaussian distribution with mean 0 and variance $\frac{1}{\sqrt{d}}$, where $d$ is the dimension of embedding size of each layer. For the experiments with external resources in the open setting, we utilize 1) word embeddings pre-trained with GloVe BIBREF37 on the Gigaword corpus for Chinese and the published embeddings with 100 dimensions pre-trained on Wikipedia and Gigaword for English; and 2) ELMo BIBREF38 and BERT BIBREF39, two recently proposed effective deep contextualized word representations. Other embeddings, i.e., POS embedding, linguistic knowledge embedding, and so on are initialized in same way as random word embedding no matter in closed or open setting. Syntactic Parsers In Table TABREF30, both Auto and Gold syntactic dependencies are provided by the dataset. Since the performance of the Auto is far behind the state-of-the-art BiaffineParser BIBREF40, we generate more dependency results by training BiaffineParser with different external knowledge, including pre-trained word embedding and BERT. Performance for different parsers is listed in Table TABREF30. Parameters In this work, we set word embedding size $d_w=100$, POS embedding size $d_t=50$. The predicate embedding size is set as $d_p=100$. The syntax-related embedding size varies along with different configurations, so as the feature embedding size $d_f$. To facilitate residual connections, all sub-layers in the model produce outputs of dimension $d_{model}=d_f+d_p$. The hidden dimension $d_{ff}=800$ is applied for all the experiments. We set the number of shared self-attention blocks $N=10$. The number of heads varies with $d_{model}$, but dimension of each head is 25. Besides, LISA incorporates syntax knowledge in the 5-th self-attention layer while RelAwe incorporates in the first 5 layers. We apply the similar dropout strategy as BIBREF13, i.e., the attention and residual dropout values are $0.2$ and $0.3$ respectively. The dropout is also applied in the middle layer of FFN with value $0.2$. We also employ label smoothing BIBREF41 of value $0.1$ during training. We use softmax-cross-entropy as our loss function, and use the Adadelta optimizer BIBREF42 with $\epsilon =10^{-6}$ and $\rho =0.95$. For all experiments, we train the model $200,000$ steps with learning rate $lr=1.0$, and each batch has 4096 words. All the hyper-parameters are tuned on the development set. Configurations We use different abbreviations to represent the parsing results, syntactic dependency representations, and incorporation methods. All the system configurations in our experiments are listed in Table TABREF36. Experiment ::: Quality of the Syntactic Dependencies We use the above-mentioned dependency trees of different quality for comparison, with Dep&Rel representation on our RelAwe model. In addition, we generate one more data AutoDel by deleting all the erroneous dependency heads and relations from the provided Auto data according to the gold heads and relations, and we do not replace them with any alternative heads and relations. We take this setting as another reference (along with GOLD) to indicate that erroneous syntax information may hurt the performance of the SRL model. We take the Gold as the upperbound reference of our task setting. 
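For reference, below is a minimal sketch of label-smoothed cross-entropy with the smoothing value 0.1 mentioned above. It is a generic illustration rather than the system's implementation; in particular, how the smoothing mass is distributed over the non-gold classes varies between implementations, and the helper name and toy logits are illustrative.

import numpy as np

def label_smoothed_cross_entropy(logits, target, eps=0.1):
    # logits: unnormalized scores of shape (n_classes,); target: gold class index.
    # The gold class receives probability 1 - eps and the remaining eps is
    # spread uniformly over the other classes (one common convention).
    n = logits.shape[0]
    m = logits.max()
    log_probs = logits - (m + np.log(np.exp(logits - m).sum()))  # log-softmax
    smoothed = np.full(n, eps / (n - 1))
    smoothed[target] = 1.0 - eps
    return -(smoothed * log_probs).sum()

logits = np.array([2.0, 0.5, -1.0, 0.0])
print(round(float(label_smoothed_cross_entropy(logits, target=0)), 3))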
Experiment results in Table TABREF37 demonstrate that, incorporating syntactic knowledge into the SRL model can achieve better performance and overall, the better the quality is, the better the SRL model performs. This is consistent with the previous study by BIBREF8 on the English dataset. Closer observation reveals two additional interesting phenomena. Firstly, SRL performance improvement is not proportionate to the improvement of dependency quality. When switching syntactic dependency trees from Auto to Biaffine, SRL performance improves 0.5%, although syntactic dependency improves about 8%. In contrast, the difference between Biaffine and BiaffineBert shows more significant improvement of 1.5%. The possible reason is that BiaffineBert provides key dependency information which is missing in other configurations. Secondly, the SRL performance gap between AutoDel and Auto is large though they provide the same correct syntactic information. This may indicate that incorporating erroneous syntactic knowledge hurts the SRL model, and even providing more correct dependencies cannot make up for the harm (cf. BiaffineBert). Experiment ::: Representation of the Syntactic Dependencies Apart from Dep and Rel, we also use DepPath and RelPath to encode the syntactic knowledge. In this subsection, we conduct experiments to compare different syntactic encoding in our SRL model. We base the experiments on our RelAwe model, since it is easier to incorporate different representations for comparison. When generating the RelPath, we filter the paths 1) when the dependency distance between the predicate and the candidate argument is more than 4, and 2) when the RelPath's frequency is less than 10. No matter in which representation, dependency label information is more important than the head and the combination of the two achieves better performance as our experiment results in Table TABREF41 show. Furthermore, using Biaffine dependency trees, DepPath and RelPath perform better than Dep and Rel. This is because of the capability of DepPath and RelPath to capture more structural information of the dependency trees. Comparing Table TABREF37 and TABREF41, when using gold dependencies, DepPath&RelPath can achieve much better result than Dep&Rel. But with the Auto trees, DepPath&RelPath is much worse. Therefore, structural information is much more sensitive to the quality of dependency trees due to error propagation. Experiment ::: Incorporation Methods [9]From the mechanism of LISA, we can find that the replaced attention head can't copy the syntactic dependency heads from DepPath. This subsection discusses the effectiveness of different incorporation methods of the syntactic knowledge. We take Biaffine's output as our dependency information for the comparison. Firstly, results in Table TABREF44 show that with little dependency information (Dep), LISA performs better, while incorporating richer syntactic knowledge (Dep&Rel or Dep&RelPath), three methods achieve similar performance. Overall, RelAwe achieves best results given enough syntactic knowledge. Secondly, Input and LISA achieve much better performance when we combine the dependency head information and the relation, while BIBREF15 have not introduced relation information to the LISA model and BIBREF9 have not combined the head and relation information either. Our proposed RelAwe method with DepPath&RelPath representation performs the best, which encodes the richest syntactic knowledge. 
Lastly, under the same settings, LISA and RelAwe perform better than Input, which indicates the importance of the location where the model incorporates the syntax, the input layer vs. the encoder layer. Experiment ::: External Resources Apart from the experiments with syntactic knowledge itself, we also compare different external resources to discover their relationship with the syntax, including pre-trained word embeddings, ELMo, and BERT. We conduct experiments with our best setting, the RelAwe model with DepPath & RelPath and the results are listed in Table TABREF45. The plain word embedding improves a little in such settings with syntactic information, while for the newly proposed Elmo and Bert, both of them can boost the models further. Experiment ::: Final Results on the Chinese Test Data Based on the above experiments and analyses, we present the overall results of our model in this subsection. We train the three models (Input, LISA, and RelAwe) with their best settings without any external knowledge as Closed, and we take the same models with Bert as Open. The DepPath&RelPath from Gold without external knowledge serves as the Gold for reference. Since we have been focusing on the task of argument identification and labeling, for both Closed and Open, we follow BIBREF22 to use existing systems' predicate senses BIBREF43 to exclude them from comparison. Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best in both Closed and Open settings. Notice that our best Closed model can almost perform as well as the state-of-the-art model while the latter utilizes pre-trained word embeddings. Besides, performance gap between three models under Open setting is very small. It indicates that the representation ability of BERT is so powerful and may contains rich syntactic information. At last, the Gold result is much higher than the other models, indicating that there is still large space for improvement for this task. Experiment ::: Results on the English Data We also conduct several experiments on the English dataset to validate the effectiveness of our approaches on other languages than Chinese and the results are in Table TABREF49. Although both configurations are not exactly the same as their original papers, we tried our best to reproduce their methods on the CoNLL2009 dataset for our comparison. Overall, the results are consistent with the Chinese experiments, while the improvement is not as large as the Chinese counterparts. The RelAwe model with DepPath&RelPath still achieves the best performance. Applying our syntax-enhanced model to more languages will be an interesting research direction to work on in the future. [10]We reimplement LISA in BIBREF15 as LISA(Dep), and BIBREF9's best DepPath approach as Input(DepPath). Therefore, we can compare with their work as fairly as possible. Other settings are the best configurations for their corresponding methods. Conclusion and Future Work This paper investigates how to incorporate syntactic dependency information into semantic role labeling in depth. Firstly, we confirm that dependency trees of better quality are more helpful for the SRL task. Secondly, we present different ways to encode the trees and the experiments show that keeping more (correct) structural information during encoding improves the SRL performance. 
Thirdly, we compare three incorporation methods and discover that our proposed relation-aware self-attention-based model is the most effective one. Although our experiments are primarily on the Chinese dataset, the approach is largely language independent. Apart from our tentative experiments on the English dataset, applying the approach to other languages will be an interesting research direction to work on in the future.
Marcheggiani and Titov (2017) and Cai et al. (2018)
ead5dc1f3994b2031a1852ecc4f97ac5760ea977
ead5dc1f3994b2031a1852ecc4f97ac5760ea977_0
Q: How many category tags are considered? Text: Introduction The substantial amount of freely available video material has brought up the need for automatic methods to summarize and compactly represent the essential content. One approach would be to produce a short video skim containing the most important video segments, as proposed in the video summarization task BIBREF0. Alternatively, the video content could be described using natural language sentences. Such an approach can lead to a very compact and intuitive representation and is typically referred to as video captioning in the literature BIBREF1. However, producing a single description for an entire video might be impractical for long unconstrained footage. Instead, dense video captioning BIBREF2 aims, first, at temporally localizing events and, then, at producing a natural language description for each of them. Fig. FIGREF1 illustrates dense video captions for an example video sequence. Most recent works in dense video captioning formulate the captioning problem as a machine translation task, where the input is a set of features extracted from the video stream and the output is a natural language sentence. Thus, captioning methods can leverage recent developments in the machine translation field, such as the Transformer model BIBREF3. The main idea of the transformer is to utilise the self-attention mechanism to model long-term dependencies in a sequence. We follow the recent work BIBREF4 and adopt the transformer architecture in our dense video captioning model. The vast majority of previous works generate captions purely based on visual information BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, almost all videos include an audio track, which could provide vital cues for video understanding. In particular, what is being said by people in the video might make a crucial difference to the content description. For instance, in a scene where someone knocks on the door from the opposite side, we only see the door, but the audio helps us to understand that somebody is behind it and wants to enter. A model relying on visual information alone therefore cannot produce a useful caption for such a scene. Similarly, other types of videos, such as instructional videos, sports videos, or video lectures, can be challenging for a purely visual captioning model. In contrast, we build our model to utilize video frames, the raw audio signal, and the speech content in the caption generation process. To this end, we deploy an automatic speech recognition (ASR) system BIBREF11 to extract time-aligned captions of what is being said (similar to subtitles) and employ them alongside the video and audio representations in the transformer model. The proposed model is assessed using the challenging ActivityNet Captions BIBREF2 benchmark dataset, where we obtain results competitive with the current state-of-the-art. The subsequent ablation studies indicate a substantial contribution from the audio and speech signals. Moreover, we retrieve the previously unused video category tags provided with the original YouTube videos BIBREF12 and use them to perform a breakdown analysis. The program code of our model and the evaluation approach will be made publicly available. Related Work ::: Video Captioning Early works in video captioning applied rule-based models BIBREF13, BIBREF14, BIBREF15, where the idea was to identify a set of video objects and use them to fill predefined templates to generate a sentence.
Later, the need for sentence templates was omitted by casting the captioning problem as a machine translation task BIBREF16. Following the success of neural models in translation systems BIBREF17, similar methods became widely popular in video captioning BIBREF18, BIBREF19, BIBREF20, BIBREF1, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. The rationale behind this approach is to train two Recurrent Neural Networks (RNNs) in an encoder-decoder fashion. Specifically, an encoder inputs a set of video features and accumulates a hidden state, which is passed to a decoder for producing a caption. To further improve the performance of the captioning model, several methods have been proposed, including shared memory between the visual and textual domains BIBREF26, BIBREF27, spatial and temporal attention BIBREF28, reinforcement learning BIBREF29, semantic tags BIBREF30, BIBREF31, other modalities BIBREF32, BIBREF33, BIBREF34, BIBREF35, and producing a paragraph instead of a single sentence BIBREF36, BIBREF1. Related Work ::: Dense Video Captioning Inspired by the idea of the dense image captioning task BIBREF37, Krishna BIBREF2 introduced the problem of dense video captioning and released a new dataset called ActivityNet Captions, which boosted the research in the field BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF38, BIBREF10. In particular, BIBREF5 adopted the idea of context-awareness BIBREF2 and generalized the temporal event proposal module to utilize both past and future contexts, as well as an attentive fusion to differentiate captions from highly overlapping events. Meanwhile, the concept of the Single Shot Detector (SSD) BIBREF39 was used to generate event proposals, and reward maximization was applied for better captioning in BIBREF6. In order to mitigate the intrinsic difficulty of RNNs in modelling long-term dependencies in a sequence, Zhou BIBREF4 tailored the recent idea of the Transformer BIBREF3 for dense video captioning. In BIBREF7 the authors noticed that captioning may benefit from interactions between objects in a video and developed a recurrent higher-order interaction module to model these interactions. Xiong BIBREF8 noticed that many previous models produced redundant captions and proposed to generate captions in a progressive manner, conditioned on the previous caption, while applying paragraph- and sentence-level rewards. Similarly, a “bird-view” correction and two-level reward maximization for more coherent story-telling have been employed in BIBREF9. Since the human annotation of a video with temporal boundaries and captions for each of them can be laborious, several attempts have been made to address this issue BIBREF40, BIBREF41. Specifically, BIBREF40 employed the idea of cycle-consistency to translate a set of captions to a set of temporal events without any paired annotation, while BIBREF41 automatically collected a dataset of unparalleled scale by exploiting the structure of instructional videos. The work most similar to our captioning model is BIBREF4, which also utilizes a version of the Transformer BIBREF3 architecture. However, their model is designed solely for visual features. Instead, we believe that dense video captioning may benefit from information from other modalities. Related Work ::: Multi-modal Dense Video Captioning A few attempts have been made to include additional cues, such as audio and speech BIBREF38, BIBREF42, BIBREF43, for the dense video captioning task.
Rahman BIBREF38 utilized the idea of cycle-consistency BIBREF40 to build a model with visual and audio inputs. However, due to weak supervision, the system did not reach high performance. Hessel BIBREF42 and Shi BIBREF43 employ a transformer architecture BIBREF3 to encode both video frames and speech segments to generate captions for instructional (cooking) videos. Yet, the strong results on a dataset restricted to instructional videos are not fully conclusive, as the speech and the captions are already very close to each other in such videos BIBREF41. In contrast to the mentioned multi-modal dense video captioning methods: (1) we demonstrate the importance of the speech and audio modalities on a domain-free dataset, and (2) we propose a multi-modal dense video captioning module (MDVC) which can be scaled to any number of modalities. Proposed Framework In this section, we briefly outline the workflow of our method, referred to as Multi-modal Dense Video Captioning (MDVC), which is shown in Fig. FIGREF5. The goal of our method is to temporally localize events in a video and to produce a textual description for each of them. To this end, we apply a two-stage approach. Firstly, we obtain the temporal event locations. For this task, we employ the Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5. Bi-SST applies a 3D Convolution network (C3D) BIBREF44 to video frames and extracts features that are passed to a subsequent bi-directional LSTM BIBREF45 network. The LSTM accumulates visual cues over time and predicts confidence scores for each location to be the start/end point of an event. Finally, a set of event proposals (start/end times) is obtained and passed to the second stage for caption generation. Secondly, we generate the captions given a proposal. To produce inputs from the audio, visual, and speech modalities, we use Inflated 3D convolutions (I3D) BIBREF46 for the visual modality and the VGGish network BIBREF47 for the audio modality. To represent speech as text, we employ an external ASR system BIBREF11. To map the text into a numerical form, we use a text embedding similar to the one used for caption encoding. The features are then fed to individual transformer models along with the words of the caption from the previous time steps. The output of the transformer is passed into a generator which fuses the outputs from all modalities and estimates a probability distribution over the word vocabulary. After sampling the next word, the process is repeated until a special end token is obtained. Fig. FIGREF1 illustrates an example modality and the corresponding event captions. Proposed Framework ::: Temporal Event Localization Module The event localization module is dedicated to generating a set of temporal regions which might contain an event. To achieve this, we employ the pre-trained Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5, as it has been shown to reach good performance in the proposal generation task. Bi-SST inputs a sequence of $F$ RGB frames from a video $V = (x_1, x_2, \dots , x_F)$ and extracts a set of 4096-d features $V^{\prime } = (f_1, f_2, \dots , f_T)$ by applying a 3D Convolution network (C3D) on non-overlapping segments of size 16 with a stride of 64 frames. To reduce the feature dimension, only 500 principal components were selected using PCA.
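For illustration, the dimensionality reduction step above can be sketched in a few lines. This is a minimal, hypothetical example (not the authors' code): it assumes a NumPy array of pre-extracted 4096-d C3D segment features and uses scikit-learn's PCA to keep 500 components; in practice the projection would presumably be fitted over the whole training set rather than per video.

```python
# Hypothetical sketch: reduce 4096-d C3D segment features to 500 principal components.
# `c3d_features` stands in for features extracted elsewhere; all names are illustrative.
import numpy as np
from sklearn.decomposition import PCA

def reduce_c3d_features(c3d_features: np.ndarray, n_components: int = 500) -> np.ndarray:
    pca = PCA(n_components=n_components)      # fit on the available feature matrix
    return pca.fit_transform(c3d_features)    # shape: (num_segments, n_components)

dummy = np.random.randn(2000, 4096).astype(np.float32)  # placeholder for real C3D outputs
print(reduce_c3d_features(dummy).shape)                  # (2000, 500)
```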
To account for the video context, events are proposed during forward and backward passes over the video sequence $V^{\prime }$, and the resulting scores are then fused together to obtain the final proposal set. Specifically, during the forward pass, an LSTM is used to accumulate the visual cues from the “past” context at each position $t$, which is treated as an ending point, and to produce confidence scores for each proposal. Afterwards, a similar procedure is performed during the backward pass, where the features $V^{\prime }$ are used in reversed order. This empowers the model to have a sense of the “future” context in a video. In contrast to the forward pass, each position is treated as a starting point of the proposal. The confidence scores from both passes are fused by multiplying the corresponding scores for each proposal at each time step and are then filtered according to a predefined threshold. As a result, we obtain a set of $N_V$ event proposals for caption generation $P_V=\lbrace p_j = (\text{start}_j, \text{end}_j, \text{score}_j)\rbrace _{j=1}^{N_V}$. Proposed Framework ::: Captioning Module In this section, we explain the captioning module using an example modality, namely, the visual one. Given a video $V$ and a set of proposals $P_V$ from the event localization module, the task of the captioning module is to provide a caption for each proposal in $P_V$. In order to extract features from the video $V$, we employ the I3D network BIBREF46 pre-trained on the Kinetics dataset, which produces 1024-d features. The gap between the extracted features and the generated captions is bridged by the Transformer BIBREF3 architecture, which has been proven to effectively encode and decode information in a sequence-to-sequence setting. Proposed Framework ::: Captioning Module ::: Feature Transformer As shown in Fig. FIGREF6, the Feature Transformer architecture mainly consists of three blocks: an encoder, a decoder, and a generator. The encoder inputs a set of extracted features $ \mathbf {v}^j = (v_1, v_2, \dots , v_{T_j}) $ temporally corresponding to a proposal $p_j$ from $P_V$ and maps it to a sequence of internal representations $ \mathbf {z}^j = (z_1, z_2, \dots , z_{T_j}) $. The decoder is conditioned on the output of the encoder $\mathbf {z}^j$ and the embedding $ \mathbf {e}^j_{\leqslant t} = (e_1, e_2, \dots , e_t)$ of the words in a caption $ \mathbf {w}^j_{\leqslant t} = (w_1, w_2, \dots , w_t) $. It produces the representation $ \mathbf {g}^j_{\leqslant t} = (g_1, g_2, \dots , g_t) $ which, in turn, is used by the generator to model a distribution over the vocabulary for the next word $ p(w_{t+1}|\mathbf {g}^j_{\leqslant t}) $. The next word is selected greedily as the word with the highest probability, and generation continues until a special ending token is sampled. The caption is initialized with a starting token; both special tokens are added to the vocabulary. Before providing an overview of the encoder, decoder, and generator, we present the notion of multi-headed attention, which acts as an essential part of the encoder and decoder blocks. The concept of multi-head attention, in turn, heavily relies on dot-product attention, which we describe next. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Dot-product Attention The idea of the multi-headed attention rests on the scaled dot-product attention, which calculates a weighted sum of values. The weights are obtained by applying the softmax function to the dot-products of each pair of rows of the queries and keys, scaled by $\frac{1}{\sqrt{D_k}}$.
The scaling is done to prevent the softmax function from entering its small-gradient regions BIBREF3. Formally, the scaled dot-product attention can be represented as $\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{Q K^\top }{\sqrt{D_k}}\right) V$, where $Q, K, V$ are the queries, keys, and values, respectively. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Multi-headed Attention The multi-headed attention block is used once in each encoder layer and twice in each decoder layer. The block consists of $H$ heads, which allows the model to jointly account for information from several representation sub-spaces at every position while preserving the same computational complexity BIBREF3. In a transformer with dimension $D_T$, each head is defined as $\text{head}_h(q, k, v) = \text{Attention}(q W^{q}_h, k W^{k}_h, v W^{v}_h)$, where $q, k, v$ are matrices which have $D_T$ columns and a number of rows that depends on the position of the multi-headed block, yet with the same number of rows for $k$ and $v$ to make the calculation in (DISPLAY_FORM11) feasible. The $W^{q}_h, W^{k}_h, W^{v}_h \in \mathbb {R}^{D_T \times D_k}$ are trainable projection matrices that map $q, k , v$ from $D_T$ into $D_k= \frac{D_T}{H}$, asserting that $D_T$ is a multiple of $H$. The multi-head attention, in turn, is the concatenation of all attention heads mapped back into $D_T$ by a trainable parameter matrix $W^o \in \mathbb {R}^{D_k \cdot H \times D_T}$: $\text{MultiHead}(q, k, v) = [\text{head}_1; \dots ; \text{head}_H]\, W^o$. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Encoder The encoder consists of $ L $ layers. The first layer inputs a set of features $ \mathbf {v}^j $ and outputs an internal representation $ \mathbf {z}_1^j \in \mathbb {R}^{T_j \times D_T} $, while each of the next layers treats the output of the previous layer as its input. Each encoder layer $l$ consists of two sub-layers: multi-headed attention and a position-wise fully connected network, which are explained later in this section. The inputs to both sub-layers are normalized using layer normalization BIBREF48, and each sub-layer is surrounded by a residual connection BIBREF49 (see Fig. FIGREF6). Formally, the $l$-th encoder layer has the following definition, where $\text{FCN}$ is the position-wise fully connected network. Note that the multi-headed attention has identical queries, keys, and values ($ \overline{\mathbf {z}}_l^j $). Such a multi-headed attention block is also referred to as self-multi-headed attention. It enables an encoder layer $l$ to account for the information from all states of the previous layer $ \mathbf {z}_{l-1}^j$. This property contrasts with the idea of an RNN, which accumulates only the information from past positions. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Decoder Similarly to the encoder, the decoder has $ L $ layers. At a position $t$, the decoder inputs a set of embedded words $\mathbf {e}^j_{\leqslant t}$ together with the output of the encoder $\mathbf {z}^j$ and sends its output to the next layer, which is conditioned on this output and, again, on the encoder output $\mathbf {z}^j$. Eventually, the decoder produces its internal representation $\mathbf {g}_{\leqslant t}^j \in \mathbb {R}^{t \times D_T}$. The decoder block is similar to the encoder but has an additional sub-layer that applies multi-headed attention to the encoder output and the output of its previous sub-layer. The decoder employs layer normalization and residual connections at all three sub-layers in the same fashion as the encoder. Specifically, the $l$-th decoder layer has the following form, where $ \mathbf {z}^j $ is the encoder output.
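To make the attention mechanism concrete, a compact PyTorch sketch of the scaled dot-product and multi-headed attention described above is given below. It follows the stated shapes ($D_k = D_T / H$, per-role projections, and an output matrix $W^o$) but is an illustrative re-implementation rather than the authors' released code; a single fused linear layer per role is used, which is equivalent to stacking the $H$ per-head projection matrices.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q: (..., T_q, D_k); k, v: (..., T_k, D_k)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float('-inf'))
    return F.softmax(scores, dim=-1) @ v

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0, "D_T must be a multiple of H"
        self.d_k, self.h = d_model // n_heads, n_heads
        self.w_q = nn.Linear(d_model, d_model)   # stacks the H matrices W^q_h
        self.w_k = nn.Linear(d_model, d_model)   # stacks the H matrices W^k_h
        self.w_v = nn.Linear(d_model, d_model)   # stacks the H matrices W^v_h
        self.w_o = nn.Linear(d_model, d_model)   # maps the concatenated heads back to D_T

    def forward(self, q, k, v, mask=None):
        B = q.size(0)
        split = lambda x: x.view(B, -1, self.h, self.d_k).transpose(1, 2)
        q, k, v = split(self.w_q(q)), split(self.w_k(k)), split(self.w_v(v))
        out = scaled_dot_product_attention(q, k, v, mask)                 # (B, H, T_q, D_k)
        out = out.transpose(1, 2).contiguous().view(B, -1, self.h * self.d_k)
        return self.w_o(out)                                              # (B, T_q, D_T)

# Example: self-attention over visual features with D_T = 1024 and H = 4.
mha = MultiHeadAttention(d_model=1024, n_heads=4)
x = torch.randn(2, 10, 1024)
print(mha(x, x, x).shape)  # torch.Size([2, 10, 1024])
```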
Note that, similarly to the encoder, (DISPLAY_FORM18) is a self-multi-headed attention function, while the second multi-headed attention block attends to both the encoder and the decoder and is also referred to as encoder-decoder attention. This block enables each layer of the decoder to attend to all states of the encoder's output $ \mathbf {z}^j$. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Position-wise Fully-Connected Network The fully connected network is used in each layer of the encoder and the decoder. It is a simple two-layer neural network that takes as input $x$, the output of the multi-head attention block, and projects each row (or position) of $x$ from the $D_T$ space onto $D_P$, $(D_P > D_T)$, and back; formally, $\text{FCN}(x) = \text{ReLU}(x W_1 + b_1)\, W_2 + b_2$, where $W_1 \in \mathbb {R}^{D_T \times D_P}$, $W_2 \in \mathbb {R}^{D_P \times D_T}$, and the biases $b_1, b_2$ are trainable parameters, and $\text{ReLU}$ is the rectified linear unit. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Generator At the position $t$, the generator consumes the output of the decoder $\mathbf {g}^j_{\leqslant t}$ and produces a distribution over the vocabulary of words $p(w_{t+1}| \mathbf {g}^j_{\leqslant t})$. To obtain the distribution, the generator applies the softmax function to the output of a fully connected layer with a weight matrix $W_G \in \mathbb {R}^{D_T \times D_V}$, where $D_V$ is the vocabulary size. The word with the highest probability is selected as the next one. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Input Embedding and Positional Encoding Since the representation of textual data is usually sparse due to a large vocabulary, the dimension of the input of a neural language model is reduced with an embedding into a space of a different size, namely $D_T$. Also, following BIBREF3, we multiply the embedding weights by $\sqrt{D_T}$. The positional encoding is required to allow the transformer to have a sense of the order in an input sequence. We adopt the approach proposed for the transformer architecture, i. e. we add the output of a combination of sine and cosine functions to the embedded input sequence BIBREF3. Proposed Framework ::: Captioning Module ::: Multi-modal Dense Video Captioning In this section, we present the multi-modal dense video captioning module, which utilises the visual, audio, and speech modalities. See Fig. FIGREF6 for a schematic representation of the module. For the speech representation $\mathbf {s}^j = (s_1, s_2, \dots , s_{T_j^s})$, we use a 512-d text embedding similar to the one employed for embedding a caption $\mathbf {w}^j_{\leqslant t}$. To account for the audio information, given a proposal $p_j$ we extract a set of features $\mathbf {a}_j = (a_1, a_2, \dots , a_{T_j^a})$ by applying the 128-d embedding layer of the pre-trained VGGish network BIBREF47 to the audio track. The visual features $\mathbf {v}^j = (v_1, v_2, \dots , v_{T_j^v}) $ are encoded as 1024-d vectors by the Inflated 3D (I3D) convolutional network BIBREF46. To fuse the features, we create an encoder and a decoder for each modality with dimensions corresponding to the size of the extracted features. The outputs from all decoders are fused inside the generator, and the distribution over the next word $w_{t+1}$ is formed.
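The input embedding and positional encoding described above can be sketched as follows. The sketch mirrors the stated recipe (token embeddings multiplied by $\sqrt{D_T}$ and summed with sine/cosine position codes, following the original transformer) and is an illustrative version, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class EmbeddingWithPositionalEncoding(nn.Module):
    def __init__(self, vocab_size: int, d_model: int, max_len: int = 5000):
        super().__init__()
        self.d_model = d_model
        self.embed = nn.Embedding(vocab_size, d_model)
        # Pre-compute the sinusoidal position codes once.
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float)
                        * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer('pe', pe.unsqueeze(0))   # (1, max_len, D_T)

    def forward(self, tokens):                         # tokens: (B, T) word indices
        x = self.embed(tokens) * math.sqrt(self.d_model)
        return x + self.pe[:, :tokens.size(1)]

# Example with the caption vocabulary size and D_T = 512 reported in the paper.
emb = EmbeddingWithPositionalEncoding(vocab_size=10172, d_model=512)
print(emb(torch.randint(0, 10172, (2, 7))).shape)  # torch.Size([2, 7, 512])
```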
In our experimentation, we found that a simple two-layer fully-connected network applied to a matrix of concatenated features performs best, with a ReLU activation after the first layer and a softmax after the second one. Each layer of the network has a matrix of trainable weights: $W_{F_1} \in \mathbb {R}^{D_F \times D_V}$ and $W_{F_2} \in \mathbb {R}^{D_V \times D_V}$, where $D_F = 512 + 128 + 1024 $ and $D_V$ is the vocabulary size. Proposed Framework ::: Model Training As the training is conducted using mini-batches of size 28, the features in one modality must be of the same length so that they can be stacked into a tensor. In this regard, we pad the features and the embedded captions to match the size of the longest sample. The model is trained by optimizing the Kullback–Leibler divergence loss, which measures the “distance” between the ground truth and predicted distributions; the values are averaged over all words in a batch, ignoring the masked tokens. Since many words in the English language have several synonyms and the human annotation may contain mistakes, we encourage the model to be less certain about its predictions and apply Label Smoothing BIBREF50 with a smoothing parameter $\gamma $ to the ground truth labels. In particular, in the ground truth distribution over the vocabulary of size $D_V$, which is usually represented as a one-hot encoding vector, the value of one is replaced with $1-\gamma $ while the rest of the values are filled with $\frac{\gamma }{D_V-1}$. During training, we exploit the teacher forcing technique, which uses the ground truth sequence up to position $t$ as the input to predict the next word, instead of using the sequence of the model's own predictions. As we input the whole ground truth sequence at once and predict the next word at each position, we need to prevent the transformer from peeking at the information from future positions, since it attends to all positions of the input. To mitigate this, we apply masking inside the self-multi-headed attention block in the decoder for each position higher than $t-1$, following BIBREF3. The details on the feature extraction and other implementation details are available in the supplementary materials. Experiments ::: Dataset We perform our experiments using the ActivityNet Captions dataset BIBREF2, which is considered the standard benchmark for the dense video captioning task. The dataset contains approximately 20k YouTube videos, split into 50/25/25 % parts for training, validation, and testing, respectively. Each video, on average, contains 3.65 temporally localized captions of around 13.65 words each and is about two minutes long. In addition, each video in the validation set is annotated twice by different annotators. We report all results using the validation set (no ground truth is provided for the test set). The dataset itself is distributed as a collection of links to YouTube videos, some of which are no longer available. The authors provide pre-computed C3D features and frames at 5 fps, but these are not suitable for our experiments. At the time of writing, we found 9,167 (out of 10,009) training and 4,483 (out of 4,917) validation videos, which is roughly 91 % of the dataset. Out of these, 2,798 training and 1,374 validation videos (approx. 28 %) contain at least one speech segment. The speech content was obtained from the closed captions (CC) provided by the YouTube ASR system, which can be thought of as subtitles.
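As a concrete illustration of the training objective described above, the following hedged sketch builds the label-smoothed target distribution and computes the masked KL-divergence loss. The padding index and tensor shapes are assumptions made for the example, and the default $\gamma = 0.7$ only mirrors the value reported in the implementation details; this is not the authors' training code.

```python
import torch
import torch.nn.functional as F

def smoothed_kl_loss(log_probs, targets, vocab_size, gamma=0.7, pad_idx=0):
    # log_probs: (N, D_V) log-probabilities from the generator; targets: (N,) word ids.
    # Smoothed target: 1 - gamma on the correct word, gamma / (D_V - 1) elsewhere.
    smooth = torch.full_like(log_probs, gamma / (vocab_size - 1))
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - gamma)
    kl = F.kl_div(log_probs, smooth, reduction='none').sum(dim=1)   # per-token KL
    mask = (targets != pad_idx).float()                             # ignore padded positions
    return (kl * mask).sum() / mask.sum().clamp(min=1.0)            # average over real tokens

# Example: 5 token positions over a vocabulary of 10,172 words; 0 is the assumed pad index.
logits = torch.randn(5, 10172)
targets = torch.tensor([3, 7, 0, 42, 0])
loss = smoothed_kl_loss(F.log_softmax(logits, dim=-1), targets, vocab_size=10172)
print(loss.item())
```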
Experiments ::: Metrics We evaluate the performance of our model using BLEU@N BIBREF51 and METEOR BIBREF52. We regard METEOR as our primary metric, as it has been shown to be highly correlated with human judgement in situations with a limited number of references (only one, in our case). We employ the official evaluation script provided in BIBREF53. The metrics are thus calculated only if a proposed event and the ground truth location of a caption overlap by more than a specified temporal Intersection over Union (tIoU) threshold, and are set to zero otherwise. All metric values are averaged over every video and, then, over every tIoU threshold in $[0.3, 0.5, 0.7, 0.9]$. For validation, we average the resulting scores over both validation sets. For the learned proposal setting, we report our results on at most 100 proposals per video. Notably, up to early 2017, the evaluation code had an issue which overestimated the performance of algorithms in the learned proposal setting BIBREF9. Therefore, we report the results using the new evaluation code. Experiments ::: Comparison with Baseline Methods We compare our method with five related approaches, namely Krishna BIBREF2, Wang BIBREF5, Zhou BIBREF4, Li BIBREF6, and Rahman BIBREF38. We take the performance values from the original papers, except for BIBREF6 and BIBREF4, which are taken from BIBREF9 due to the evaluation issue (see Sec. SECREF27). The lack of access to the full ActivityNet Captions dataset makes a strictly fair comparison difficult, as we have fewer training and validation videos. Nevertheless, we present our results in two set-ups: 1) the full validation set with random input features for missing entries, and 2) videos with all three modalities present (video, audio, and speech). The first one is chosen to indicate the lower bound of our performance with the full dataset, whereas the second one (referred to as “no missings”) concentrates on the multi-modal setup, which is the main contribution of our work. The obtained results are presented in Tab. TABREF25. Our method (MDVC) achieves comparable or better performance, even though we have access to a smaller training set and 9 % of the validation videos are missing (replaced with random input features). Furthermore, if all three modalities are present, our method outperforms all baseline approaches in the case of both GT and learned proposals. Notably, we outperform BIBREF4, which is also based on the transformer architecture and accounts for optical flow. This shows the superior performance of our captioning module, which was, nevertheless, trained on a smaller amount of data. Experiments ::: Ablation Studies In this section, we perform an ablation analysis highlighting the effect of different design choices of our method. For all experiments, we use the full unfiltered ActivityNet Captions validation set with ground truth event proposals. Firstly, we assess the selection of the model architecture. To this end, we implemented a version of our method where the transformer was replaced by a Bidirectional Recurrent Neural Network with Gated Recurrent Units with attention (Bi-GRU), proposed in BIBREF54. To distil the effect of the change in architecture, the results are shown for visual-only models. Both Bi-GRU and the transformer input I3D features extracted from 64 RGB and optical flow frames (the final model inputs 24 frames). Finally, we set a lower bound for the feature performance by training a transformer model with random video features. Tab. TABREF32 shows the comparison.
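For clarity, the tIoU gating used in this evaluation protocol can be sketched as below. The official script BIBREF53 is what the paper actually uses, so this is only an illustrative outline; `metric_fn` is a placeholder for a sentence-level scorer such as METEOR.

```python
# Illustrative sketch of tIoU gating and threshold averaging; not the official evaluator.
def tiou(a, b):
    """Temporal IoU of two (start, end) segments given in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def gated_average(pairs, metric_fn, thresholds=(0.3, 0.5, 0.7, 0.9)):
    """pairs: list of (proposal_seg, gt_seg, predicted_caption, reference_caption)."""
    per_threshold = []
    for t in thresholds:
        scores = [metric_fn(pred, ref) if tiou(p, g) > t else 0.0
                  for p, g, pred, ref in pairs]
        per_threshold.append(sum(scores) / max(len(scores), 1))
    return sum(per_threshold) / len(per_threshold)   # averaged over the tIoU thresholds
```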
To conclude, we observe that the feature transformer-based model is not only lighter but also achieves better performance in the dense video captioning task. Moreover, both methods clearly surpass the random baseline. Secondly, we evaluate the contribution of the different modalities in our framework. Tab. TABREF33 contains the results for different modality configurations as well as for two feature fusion approaches: averaging the output probabilities, and concatenating the outputs of all modalities and applying two fully connected (FC) layers on top. We observe that the audio-only model has the worst performance, followed by the visual-only model and the combination of these two. Moreover, the concatenation with FC layers results in better performance than averaging. To further assess whether the performance gain is due to the additional modalities or to the extra capacity in the FC layers, we trained a visual-only model with two additional FC layers. The results indicate that such a configuration performs worse than any bi-modal setup. Overall, we conclude that the final model with all three modalities performs best among all tested set-ups, which highlights the importance of the multi-modal setting in the dense video captioning task. Fig. FIGREF29 shows a qualitative comparison between different models in our ablation study. Moreover, we provide the corresponding captions from the best performing baseline method (Zhou BIBREF4). We noticed the following pattern: the audio-only modality produces coherent sentences and captures the concept of speaking in the video. However, there are clear mistakes in the caption content. In contrast, the model with all three modalities manages to capture the man who speaks to the camera, which is also present in the ground truth. Both the visual-only MDVC and Zhou struggle to describe the audio details. Finally, to test whether our model improves the performance in general rather than in a specific video category, we report the comparison of the different versions of MDVC per category. To this end, we retrieve the category labels from the YouTube API BIBREF12 (US region) for every available ActivityNet Captions validation video. These labels are given by the user when uploading the video and roughly represent the video content type. The comparison is shown in Fig. FIGREF31. The results imply a consistent gain in performance within each category except for “Film & Animation” and “Travel & Events”, which might be explained by a lack of correspondence between the visual and audio tracks. Specifically, the video might be accompanied by music, e. g. the promotion of a resort. Also, “Film & Animation” contains cartoon-like movies which might have a realistic soundtrack while the visual track is goofy. Conclusion The use of different modalities in computer vision is still an underrepresented topic and, we believe, deserves more attention. In this work, we introduced a multi-modal dense video captioning module (MDVC) and showed the importance of the audio and speech modalities for the dense video captioning task. Specifically, MDVC is based on the transformer architecture, which encodes the feature representation of each modality for a specific event proposal and produces a caption using the information from these modalities. The experiments, conducted on the ActivityNet Captions dataset, show the superior performance of our captioning module compared to visual-only models in the existing literature. An extensive ablation study verifies this conclusion.
We believe that our results firmly indicate that future work in video captioning should utilize multi-modal input. Supplementary Material The supplementary material consists of four sections. In Section SECREF35, we provide qualitative results of the MDVC on another example video. The details on feature extraction and implementation are described in Sections SECREF36 and SECREF38. Finally, the comparison with other methods is shown in Section SECREF39. Supplementary Material ::: Qualitative Results (Another Example) In Figure FIGREF34, we provide a qualitative analysis of captioning on another video from the ActivityNet Captions validation set to emphasize the importance of additional modalities, namely speech and audio, for dense video captioning. We compare the captions proposed by MDVC (our model) conditioned on different sets of modalities: audio-only (A-only), visual-only (V-only), and all modalities (S + A + V). Additionally, we provide the results of the captioning model proposed by Zhou BIBREF4 (visual only), which showed the most promising results according to METEOR. More precisely, the video (YouTube video id: EGrXaq213Oc) lasts two minutes and contains 12 human annotations. The video is an advertisement for snowboarding lessons for children. It shows examples of children successfully riding a snowboard on a hill and supportive adults that help them to learn. A lady narrates the video and appears in the shot a couple of times. Generally, we may observe that MDVC with the audio modality alone (A-only) mostly describes that a woman is speaking, which is correct according to the audio content, yet the details about snowboarding and children are missing. This is expectedly challenging for the network, as no sound effects related to snowboarding are present. Meanwhile, the visual-only MDVC grasps the content well but misses important details such as the gender of the speaker. The multi-modal MDVC borrows the advantages of both, which results in more accurate captions. The benefits of several modalities stand out in the captions for the $p_2$ and $p_{10}$ segments. Note that despite the appearance of the lady in the shot during $p_{10}$, the ground truth caption misses it, yet our model manages to capture it. Still, some limitations of the final model can be noticed as well. In particular, the content of some proposals is dissimilar to the generated captions, e. g. the color of the jacket ($p_4$, $p_5$), or when the lady is holding a snowboard with a child on it while the model predicts that she is holding a ski ($p_7$). Also, the impressive tricks on a snowboard were described simply as “riding down a hill”, which is not completely erroneous but still inaccurate ($p_8$). Overall, the model makes reasonable mistakes, except for proposals $p_3$ and $p_4$. Finally, the generated captions provide a more general description of a scene compared to the detailed and specific ground truth, which could be a subject for future investigation. Supplementary Material ::: Details on Feature Extraction Before training, we pre-calculate the features for both the audio and visual modalities. In particular, the audio features were extracted using VGGish BIBREF47, which was trained on AudioSet BIBREF55. The input to the VGGish model is a $96\times 64$ log mel-scaled spectrogram extracted from non-overlapping $0.96$-second segments.
The log mel-scaled spectrogram is obtained by applying the Short-Time Fourier Transform to a 16 kHz mono audio track, using a periodic Hann window of 25 ms length with 10 ms overlap. The output is a 128-d feature vector taken after the activation function and before the classification layer. Therefore, the input to MDVC is a matrix with dimension $T_j^a \times 128$, where $T_j^a$ is the number of features the proposal $p_j$ consists of. The visual features were extracted using the I3D network BIBREF46, which inputs a set of 24 RGB and optical flow frames extracted at 25 fps. The optical flow is extracted with PWC-Net BIBREF58. First, each frame is resized such that the shortest side is 256 pixels. Then, the center region is cropped to obtain $224\times 224$ frames. Both the RGB and flow stacks are passed through the corresponding branch of I3D. The outputs of the two branches are summed together, producing 1024-d features for each stack of 24 frames. Hence, the resulting matrix has the shape $T_j^v\times 1024$, where $T_j^v$ is the number of features required for a proposal $p_j$. We use 24 frames for the I3D input to temporally match the input of the audio modality, as $\frac{24}{25} = 0.96$. Also note that I3D was pre-trained on the Kinetics dataset with inputs of 64 frames, while we use 24 frames. This is a valid approach since we employ the output of the second-to-last layer after activation and average it over the temporal axis. The input for the speech modality is represented by temporally aligned text segments in the English language (one can think of them as subtitles). For a proposal $ p_j $, we pick all segments that both: a) end after the proposal starting point, and b) start before the proposal ending point. This provides us with sufficient coverage of what has been said during the proposal segment. Similarly to captions, each word in a speech segment is represented as a number corresponding to the word's index in the vocabulary and is then passed through a text embedding of size 512. We omit the subtitles that describe sounds, such as “[Applause]” and “[Music]”, as we are only interested in the effect of the speech. Therefore, the speech transformer encoder inputs matrices of shape $T^s_j\times 512$, where $T^s_j$ is the number of words in the speech corresponding to proposal $p_j$. Supplementary Material ::: Implementation Details Since no intermediate layers connecting the features and the transformers are used, the dimension of the feature transformers $D_T$ corresponds to the size of the extracted features: 512, 128, and 1024 for the speech, audio, and visual modalities, respectively. Each feature transformer has one layer ($L=1$), while the internal layer of the position-wise fully-connected network has $D_P=2048$ units for all modality transformers, which was found to perform optimally. We use $H=4$ heads in all multi-headed attention blocks. The caption and speech vocabulary sizes are 10,172 and 23,043, respectively. In all experiments, except for the audio-only model, we use the Adam optimizer BIBREF56, a batch containing features for 28 proposals, a learning rate of $10^{-5}$, $\beta = (0.9, 0.99)$, and a smoothing parameter $\gamma = 0.7$. In the audio-only model, we apply a two-layer transformer architecture with a learning rate of $10^{-4}$ and $\gamma = 0.2$. To regularize the weights of the model, in every experiment Dropout BIBREF57 with $p = 0.1$ is applied to the outputs of the positional encoding, in every sub-layer before adding the residual, and after the first internal layer of the multi-modal generator.
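The speech segment selection rule described above (keep every subtitle segment that ends after the proposal start and starts before the proposal end, and drop sound descriptions such as “[Applause]”) can be sketched as follows. The segment data structure and the bracket-based filter are assumptions made for illustration, not the authors' exact preprocessing code.

```python
from typing import List, Tuple

Segment = Tuple[float, float, str]  # (start_sec, end_sec, text) of one ASR/subtitle segment

def speech_for_proposal(segments: List[Segment], start: float, end: float) -> str:
    keep = [text for (s, e, text) in segments
            if e > start and s < end                                    # overlaps the proposal
            and not (text.startswith('[') and text.endswith(']'))]      # skip "[Music]" etc.
    return ' '.join(keep)

# Example usage with made-up subtitle segments.
subs = [(0.0, 2.5, '[Music]'),
        (2.5, 6.0, 'welcome to the lesson'),
        (6.0, 9.0, 'keep your knees bent')]
print(speech_for_proposal(subs, start=3.0, end=8.0))  # "welcome to the lesson keep your knees bent"
```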
During the experimentation, models were trained for at most 200 epochs, and training was stopped early if the average METEOR score, calculated on the ground truth event proposals of both validation sets, had not improved for 50 consecutive epochs. At the end of the training, we employ the best model to estimate its performance on the learned temporal proposals. Usually, the training of the best models finished by the 50th epoch; e. g. the final model (MDVC (S + A + V)) was trained for 30 epochs, which took roughly 15 hours on one consumer-type GPU (Nvidia GeForce RTX 2080 Ti). The code for training relies heavily on the PyTorch framework and will be released upon publication. Supplementary Material ::: Comparison with Other Methods In Table TABREF37, we present a comparison with another body of methods BIBREF8, BIBREF9 which were not included in the main comparison, as they use a Reinforcement Learning (RL) approach to directly optimize the non-differentiable metric (METEOR). We believe that our method could also benefit from these techniques, as the ablation studies in BIBREF8, BIBREF9 show significant gains obtained by applying them. As anticipated, methods which employ reinforcement learning generally perform better in terms of METEOR. Interestingly, our model still outperforms BIBREF8, which uses RL in the captioning module.
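The early-stopping schedule described above can be sketched as follows; `train_one_epoch` and `evaluate_meteor` are placeholders for the actual training and validation routines, and the model is assumed to be a PyTorch module, so this is an outline of the schedule rather than the released training loop.

```python
def train_with_early_stopping(model, train_one_epoch, evaluate_meteor,
                              max_epochs=200, patience=50):
    """Train for at most `max_epochs`; stop if METEOR has not improved for `patience` epochs."""
    best_score, best_state, epochs_without_gain = float('-inf'), None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model)
        score = evaluate_meteor(model)          # METEOR averaged over both validation sets
        if score > best_score:
            best_score, epochs_without_gain = score, 0
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
        else:
            epochs_without_gain += 1
            if epochs_without_gain >= patience:
                break
    if best_state is not None:
        model.load_state_dict(best_state)       # keep the best checkpoint
    return best_score
```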
14 categories
86cd1228374721db67c0653f2052b1ada6009641
86cd1228374721db67c0653f2052b1ada6009641_0
Q: What domain does the dataset fall into? Text: Introduction The substantial amount of freely available video material has brought up the need for automatic methods to summarize and compactly represent the essential content. One approach would be to produce a short video skim containing the most important video segments as proposed in the video summarization task BIBREF0. Alternatively, the video content could be described using natural language sentences. Such an approach can lead to a very compact and intuitive representation and is typically referred to as video captioning in the literature BIBREF1. However, producing a single description for an entire video might be impractical for long unconstrained footage. Instead, dense video captioning BIBREF2 aims, first, at temporally localizing events and, then, at producing natural language description for each of them. Fig. FIGREF1 illustrates dense video captions for an example video sequence. Most recent works in dense video captioning formulate the captioning problem as a machine translation task, where the input is a set of features extracted from the video stream and the output is a natural language sentence. Thus, the captioning methods can be leveraged by recent developments in machine translation field, such as Transformer model BIBREF3. The main idea in the transformer is to utilise self-attention mechanism to model long-term dependencies in a sequence. We follow the recent work BIBREF4 and adopt the transformer architecture in our dense video captioning model. The vast majority of previous works are generating captions purely based on visual information BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, almost all videos include an audio track, which could provide vital cues for video understanding. In particular, what is being said by people in the video, might make a crucial difference to the content description. For instance, in a scene when someone knocks the door from an opposite side, we only see the door but the audio helps us to understand that somebody is behind it and wants to enter. Therefore, it is impossible for a model to make a useful caption for it. Also, other types of videos as instruction videos, sport videos, or video lectures could be challenging for a captioning model. In contrast, we build our model to utilize video frames, raw audio signal, and the speech content in the caption generation process. To this end, we deploy automatic speech recognition (ASR) system BIBREF11 to extract time-aligned captions of what is being said (similar to subtitles) and employ it alongside with video and audio representations in the transformer model. The proposed model is assessed using the challenging ActivityNet Captions BIBREF2 benchmark dataset, where we obtain competitive results to the current state-of-the-art. The subsequent ablation studies indicate a substantial contribution from audio and speech signals. Moreover, we retrieve and perform breakdown analysis by utilizing previously unused video category tags provided with the original YouTube videos BIBREF12. The program code of our model and the evaluation approach will be made publicly available. Related Work ::: Video Captioning Early works in video captioning applied rule-based models BIBREF13, BIBREF14, BIBREF15, where the idea was to identify a set of video objects and use them to fill predefined templates to generate a sentence. 
Later, the need for sentence templates was omitted by casting the captioning problem as a machine translation task BIBREF16. Following the success of neural models in translation systems BIBREF17, similar methods became widely popular in video captioning BIBREF18, BIBREF19, BIBREF20, BIBREF1, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. The rationale behind this approach is to train two Recurrent Neural Networks (RNNs) in an encoder-decoder fashion. Specifically, an encoder inputs a set of video features, accumulates its hidden state, which is passed to a decoder for producing a caption. To further improve the performance of the captioning model, several methods have been proposed, including shared memory between visual and textual domains BIBREF26, BIBREF27, spatial and temporal attention BIBREF28, reinforcement learning BIBREF29, semantic tags BIBREF30, BIBREF31, other modalities BIBREF32, BIBREF33, BIBREF34, BIBREF35, and by producing a paragraph instead of one sentence BIBREF36, BIBREF1. Related Work ::: Dense Video Captioning Inspired by the idea of the dense image captioning task BIBREF37, Krishna BIBREF2 introduced a problem of dense video captioning and released a new dataset called ActivityNet Captions which leveraged the research in the field BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF38, BIBREF10. In particular, BIBREF5 adopted the idea of the context-awareness BIBREF2 and generalized the temporal event proposal module to utilize both past and future contexts as well as an attentive fusion to differentiate captions from highly overlapping events. Meanwhile, the concept of Single Shot Detector (SSD) BIBREF39 was also used to generate event proposals and reward maximization for better captioning in BIBREF6. In order to mitigate the intrinsic difficulties of RNNs to model long-term dependencies in a sequence, Zhou BIBREF4 tailored the recent idea of Transformer BIBREF3 for dense video captioning. In BIBREF7 the authors noticed that the captioning may benefit from interactions between objects in a video and developed recurrent higher-order interaction module to model these interactions. Xiong BIBREF8 noticed that many previous models produced redundant captions, and proposed to generate captions in a progressive manner, conditioned on the previous caption while applying paragraph- and sentence-level rewards. Similarly, a “bird-view” correction and two-level reward maximization for a more coherent story-telling have been employed in BIBREF9. Since the human annotation of a video with temporal boundaries and captions for each of them can be laborious, several attempts have been made to address this issue BIBREF40, BIBREF41. Specifically, BIBREF40 employed the idea of cycle-consistency to translate a set of captions to a set of temporal events without any paired annotation, while BIBREF41 automatically-collected dataset of an unparalleled-scale exploiting the structure of instructional videos. The most similar work to our captioning model is BIBREF4 that also utilizes a version of the Transformer BIBREF3 architecture. However, their model is designed solely for visual features. Instead, we believe that dense video captioning may benefit from information from other modalities. Related Work ::: Multi-modal Dense Video Captioning A few attempts has been made to include additional cues like audio and speech BIBREF38, BIBREF42, BIBREF43 for dense video captioning task. 
Rahman BIBREF38 utilized the idea of cycle-consistency BIBREF40 to build a model with visual and audio inputs. However, due to weak supervision, the system did not reach high performance. Hessel BIBREF42 and Shi BIBREF43 employ a transformer architecture BIBREF3 to encode both video frames and speech segments to generate captions for instructional (cooking) videos. Yet, the high results on a dataset which is restricted to instructional video appear to be not evidential as the speech and the captions are already very close to each other in such videos BIBREF41. In contrast to the mentioned multi-modal dense video captioning methods: (1) we present the importance of the speech and audio modalities on a domain-free dataset, (2) propose a multi-modal dense video captioning module (MDVC) which can be scaled to any number of modalities. Proposed Framework In this section, we briefly outline the workflow of our method referred to as Multi-modal Dense Video Captioning (MDVC) which is shown in Fig. FIGREF5. The goal of our method is to temporally localize events on a video and to produce a textual description for each of them. To this end, we apply a two-stage approach. Firstly, we obtain the temporal event locations. For this task, we employ the Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5. Bi-SST applies 3D Convolution network (C3D) BIBREF44 to video frames and extracts features that are passed to subsequent bi-directional LSTM BIBREF45 network. The LSTM accumulates visual cues over time and predicts confidence scores for each location to be start/end point of an event. Finally, a set of event proposals (start/end times) is obtained and passed to the second stage for caption generation. Secondly, we generate the captions given a proposal. To produce inputs from audio, visual, and speech modalities, we use Inflated 3D convolutions (I3D) BIBREF46 for visual and VGGish network BIBREF47 for audio modalities. For speech representation as a text, we employ an external ASR system BIBREF11. To represent the text into a numerical form, we use a similar text embedding which is used for caption encoding. The features are, then, fed to individual transformer models along with the words of a caption from the previous time steps. The output of the transformer is passed into a generator which fuses the outputs from all modalities and estimates a probability distribution over the word vocabulary. After sampling the next word, the process is repeated until a special end token is obtained. Fig. FIGREF1 illustrates an example modality and the corresponding event captions. Proposed Framework ::: Temporal Event Localization Module An event localization module is dedicated to generating a set of temporal regions which might contain an event. To achieve this, we employ pre-trained Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5 as it has is been shown to reach good performance in the proposal generation task. Bi-SST inputs a sequence of $T$ RGB frames from a video $V = (x_1, x_2, \dots , x_F)$ and extracts a set of 4096-d features $V^{\prime } = (f_1, f_2, \dots , f_T)$ by applying a 3D Convolution network (C3D) on non-overlapping segments of size 16 with a stride of 64 frames. To reduce the feature dimension, only 500 principal components were selected using PCA. 
To account for the video context, events are proposed during forward and backward passes on a video sequence $V^{\prime }$, and, then, the resulting scores are fused together to obtain the final proposal set. Specifically, during the forward pass, LSTM is used to accumulate the visual clues from the “past” context at each position $t$ which is treated as an ending point and produce confidence scores for each proposal. Afterwards, a similar procedure is performed during the backward pass where the features $V^{\prime }$ are used in a reversed order. This empowers the model to have a sense of the “future” context in a video. In contrast to the forward pass, each position is treated as a starting point of the proposal. Finally, the confidence scores from both passes are fused by multiplication of corresponding scores for each proposal at each time step, and, then, filtered according to a predefined threshold. Finally, we obtain a set of $N_V$ event proposals for caption generation $P_V=\lbrace p_j = (\text{start}_j, \text{end}_j, \text{score}_j)\rbrace _{j=1}^{N_V}$. Proposed Framework ::: Captioning Module In this section we explain the captioning based for an example modality, namely, visual. Given a video $V$ and a set of proposals $P_V$ from the event localization module, the task of the captioning module is to provide a caption for each proposal in $P_V$. In order to extract features from a video $V$, we employ I3D network BIBREF46 pre-trained on the Kinetics dataset which produces 1024-d features. The gap between the extracted features and the generated captions is filled with Transformer BIBREF3 architecture which was proven to effectively encode and decode the information in a sequence-to-sequence setting. Proposed Framework ::: Captioning Module ::: Feature Transformer As shown in Fig. FIGREF6, Feature Transformer architecture mainly consists of three blocks: an encoder, decoder, and generator. The encoder inputs a set of extracted features $ \mathbf {v}^j = (v_1, v_2, \dots , v_{T_j}) $ temporally corresponding to a proposal $p_j$ from $P_V$ and maps it to a sequence of internal representations $ \mathbf {z}^j = (z_1, z_2, \dots , z_{T_j}) $. The decoder is conditioned on the output of the encoder $\mathbf {z}^j$ and the embedding $ \mathbf {e}^j_{\leqslant t} = (e_1, e_2, \dots , e_t)$ of the words in a caption $ \mathbf {w}^j_{\leqslant t} = (w_1, w_2, \dots , w_t) $. It produces the representation $ \mathbf {g}^j_{\leqslant t} = (g_1, g_2, \dots , g_t) $ which, in turn, is used by the generator to model a distribution over a vocabulary for the next word $ p(w_{t+1}|\mathbf {g}^j_{\leqslant t}) $. The next word is selected greedily by obtaining the word with the highest probability until a special ending token is sampled. The captioning is initialized with a starting token. Both are added to the vocabulary. Before providing an overview of the encoder, decoder, and generator, we presenting the notion of multi-headed attention that acts as an essential part of the decoder and encoder blocks. The concept of the multi-head attention, in turn, heavily relies on dot-product attention which we describe next. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Dot-product Attention The idea of the multi-headed attention rests on the scaled dot-product attention which calculates the weighted sum of values. The weights are obtained by applying the softmax function on the dot-product of each pair of rows of queries and keys scaled by $\frac{1}{\sqrt{D_k}}$. 
The scaling is done to prevent the softmax function from being in the small gradient regions BIBREF3. Formally the scaled dot-product attention can be represented as follows where $Q, K, V $ are queries, keys, and values, respectively. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Multi-headed Attention The multi-headed attention block is used once in each encoder layer and twice in each decoder layer. The block consists of $H$ heads that allows to cooperatively account for information from several representations sub-spaces at every position while preserving the same computation complexity BIBREF3. In a transformer with dimension $D_T$, each head is defined in the following way where $q, k, v$ are matrices which have $D_T$ columns and the number of rows depending on the position of the multi-headed block, yet with the same number of rows for $k$ and $v$ to make the calculation in (DISPLAY_FORM11) to be feasible. The $W^{q}_h, W^{k}_h, W^{v}_h \in \mathbb {R}^{D_T \times D_k}$ are trainable projection matrices that map $q, k , v$ from $D_T$ into $D_k= \frac{D_T}{H}$, asserting $D_T$ is a multiple of $H$. The multi-head attention, in turn, is the concatenation of all attention heads mapped back into $D_T$ by trainable parameter matrix $W^o \in \mathbb {R}^{D_k \cdot H \times D_T}$: Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Encoder The encoder consists of $ L $ layers. The first layer inputs a set of features $ \mathbf {v}^j $ and outputs an internal representation $ \mathbf {z}_1^j \in \mathbb {R}^{T_j \times D_T} $ while each of the next layers treats the output of a previous layer as its input. Each encoder layer $l$ consist of two sub-layers: multi-headed attention and position-wise fully connected network which are explained later in this section. The input to both sub-layers are normalized using layer normalization BIBREF48, each sub-layer is surrounded by a residual connection BIBREF49 (see Fig. FIGREF6). Formally, the $l$-th encoder layer has the following definition where $\text{FCN}$ is the position-wise fully connected network. Note, the multi-headed attention has identical queries, keys, and values ($ \overline{\mathbf {z}}_l^j $). Such multi-headed attention block is also referred to as self-multi-headed attention. It enables an encoder layer $l$ to account for the information from all states from the previous layer $ \mathbf {z}_{l-1}^j$. This property contrasts with the idea of RNN which accumulates only the information from the past positions. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Decoder Similarly to the encoder, the decoder has $ L $ layers. At a position $t$, the decoder inputs a set of embedded words $\mathbf {e}^j_{\leqslant t}$ with the output of the encoder $\mathbf {z}^j$ and sends the output to the next layer which is conditioned on this output and, again, the encoder output $\mathbf {z}^j$. Eventually, the decoder producing its internal representation $\mathbf {g}_{\leqslant t}^j \in \mathbb {R}^{t \times D_T}$. The decoder block is similar to the encoder but has an additional sub-layer that applies multi-headed attention on the encoder output and the output of its previous sub-layer. The decoder employs the layer normalization and residual connections at all three sub-layers in the same fashion as the encoder. Specifically, the $l$-th decoder layer has the following form: where $ \mathbf {z}^j $ is the encoder output. 
Note, similarly to the encoder, (DISPLAY_FORM18) is a self-multi-headed attention function while the second multi-headed attention block attends on both the encoder and decoder and is also referred to as encoder-decoder attention. This block enables each layer of the decoder to attend all state of the encoder's output $ \mathbf {z}^j$. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Position-wise Fully-Connected Network The fully connected network is used in each layer of the encoder and the decoder. It is a simple two-layer neural network that inputs $x$ with the output of the multi-head attention block, and, then, projects each row (or position) of the input $x$ from $D_T$ space onto $D_P$, $(D_P > D_T)$ and back, formally: where $W_1 \in \mathbb {R}^{D_T \times D_P}$, $W_2 \in \mathbb {R}^{D_P \times D_T}$, and biases $b_1, b_2$ are trainable parameters, $\text{ReLU}$ is a rectified linear unit. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Generator At the position $t$, the generator consumes the output of the decoder $\mathbf {g}^j_{\leqslant t}$ and produces a distribution over the vocabulary of words $p(w_{t+1}| \mathbf {g}^j_{\leqslant t})$. To obtain the distribution, the generator applies the softmax function of the output of a fully connected layer with a weight matrix $W_G \in \mathbb {R}^{D_T \times D_V}$ where $D_V$ is a vocabulary size. The word with the highest probability is selected as the next one. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Input Embedding and Positional Encoding Since the representation of textual data is usually sparse due to a large vocabulary, the dimension of the input of a neural language model is reduced with an embedding into a dimension of a different size, namely $D_T$. Also, following BIBREF3, we multiply the embedding weights by $\sqrt{D_T}$. The position encoding is required to allow the transformer to have a sense of the order in an input sequence. We adopt the approach proposed for a transformer architecture, i. e. we add the output of the combination of sine and cosine functions to the embedded input sequence BIBREF3. Proposed Framework ::: Captioning Module ::: Multi-modal Dense Video Captioning In this section, we present the multi-modal dense video captioning module which, utilises visual, audio, and speech modalities. See Fig. FIGREF6 for a schematic representation of the module. For the sake of speech representation $\mathbf {s}^j = (s_1, s_2, \dots , s_{T_j^s})$, we use the text embedding of size 512-d that is similar to the one which is employed in the embedding of a caption $\mathbf {w}^j_{\leqslant t}$. To account for the audio information, given a proposal $p_j$ we extract a set of features $\mathbf {a}_j = (a_1, a_2, \dots , a_{T_j^a})$ applying the 128-d embedding layer of the pre-trained VGGish network BIBREF47 on an audio track. While the visual features $\mathbf {v}^j = (v_1, v_2, \dots v_{T_j^v}) $ are encoded with 1024-d vectors by Inflated 3D (I3D) convolutional network BIBREF46. To fuse the features, we create an encoder and a decoder for each modality with dimensions corresponding to the size of the extracted features. The outputs from all decoders are fused inside of the generator, and the distribution of a next word $w_{t+1}$ is formed. 
In our experimentation, we found that a simple two-layer fully-connected network applied of a matrix of concatenated features performs the best with the ReLU activation after the first layer and the softmax after the second one. Each layer of the network has a matrix of trainable weights: $W_{F_1} \in \mathbb {R}^{D_F \times D_V}$ and $W_{F_2} \in \mathbb {R}^{D_V \times D_V}$ with $D_F = 512 + 128 + 1024 $ and $D_V$ is a vocabulary size. Proposed Framework ::: Model Training As the training is conducted using mini-batches of size 28, the features in one modality must be of the same length so the features could be stacked into a tensor. In this regard, we pad the features and the embedded captions to match the size of the longest sample. The model is trained by optimizing the Kullback–Leibler divergence loss which measures the “distance” between the ground truth and predicted distributions and averages the values for all words in a batch ignoring the masked tokens. Since many words in the English language may have several synonyms or human annotation may contain mistakes, we undergo the model to be less certain about the predictions and apply Label Smoothing BIBREF50 with the smoothing parameter $\gamma $ on the ground truth labels to mitigate this. In particular, the ground truth distribution over the vocabulary of size $D_V$, which is usually represented as one-hot encoding vector, the identity is replaced with probability $1-\gamma $ while the rest of the values are filled with $\frac{\gamma }{D_V-1}$. During training, we exploit the teacher forcing technique which uses the ground truth sequence up to position $t$ as the input to predict the next word instead of using the sequence of predictions. As we input the whole ground truth sequence at once and predicting the next words at each position, we need to prevent the transformer from peeping for the information from the next positions as it attends to all positions of the input. To mitigate this, we apply masking inside of the self-multi-headed attention block in the decoder for each position higher than $t-1$, following BIBREF3. The details on the feature extraction and other implementation details are available in the supplementary materials. Experiments ::: Dataset We perform our experiments using ActivityNet Captions dataset BIBREF2 that is considered as the standard benchmark for dense video captioning task. The dataset contains approximately 20k videos from YouTube and split into 50/25/25 % parts for training, validation, and testing, respectively. Each video, on average, contains 3.65 temporally localized captions, around 13.65 words each, and two minutes long. In addition, each video in the validation set is annotated twice by different annotators. We report all results using the validation set (no ground truth is provided for the test set). The dataset itself is distributed as a collection of links to YouTube videos, some of which are no longer available. Authors provide pre-computed C3D features and frames at 5fps, but these are not suitable for our experiments. At the time of writing, we found 9,167 (out of 10,009) training and 4,483 (out of 4,917) validation videos which is, roughly, 91 % of the dataset. Out of these 2,798 training and 1,374 validation videos (approx. 28 %) contain at least one speech segment. The speech content was obtained from the closed captions (CC) provided by the YouTube ASR system which can be though as subtitles. 
Experiments ::: Metrics We are evaluating the performance of our model using BLEU@N BIBREF51 and METEOR BIBREF52. We regard the METEOR as our primary metric as it has been shown to be highly correlated with human judgement in a situation with a limited number of references (only one, in our case). We employ the official evaluation script provided in BIBREF53. Thus, the metrics are calculated if a proposed event and a ground truth location of a caption overlaps more than a specified temporal Intersection over Union (tIoU) and zero otherwise. All metric values are averaged for every video, and, then, for every threshold tIoU in $[0.3, 0.5, 0.7, 0.9]$. On the validation, we average the resulting scores for both validation sets. For the learned proposal setting, we report our results on at most 100 proposals per video. Notably, up to early 2017, the evaluation code had an issue which previously overestimated the performance of the algorithms in the learned proposal setting BIBREF9. Therefore, we report the results using the new evaluation code. Experiments ::: Comparison with Baseline Methods We compare our method with five related approaches, namely Krishna BIBREF2, Wang BIBREF5, Zhou BIBREF4, Li BIBREF6, and Rahman BIBREF38. We take the performance values from the original papers, except for BIBREF6, and BIBREF4, which are taken from BIBREF9 due to the evaluation issue (see Sec. SECREF27). The lack of access to the full ActivityNet Captions dataset makes strictly fair comparison difficult as we have less training and validation videos. Nevertheless, we present our results in two set-ups: 1) full validation set with random input features for missing entries, and 2) videos with all three modalities present (video, audio, and speech). The first one is chosen to indicate the lower bound of our performance with the full dataset. Whereas, the second one (referred to as “no missings”) concentrates on the multi-modal setup, which is the main contribution of our work. The obtained results are presented in Tab. TABREF25. Our method (MDVC) achieves comparable or better performance, even though we have access to smaller training set and 9 % of the validation videos are missing (replaced with random input features). Furthermore, if all three modalities are present, our method outperforms all baseline approaches in the case of both GT and learned proposals. Notably, we outperform BIBREF4 which is also based on the transformer architecture and account for the optical flow. This shows the superior performance of our captioning module which, yet, trained on the smaller amount of data. Experiments ::: Ablation Studies In this section, we perform an ablation analysis highlighting the effect of different design choices of our method. For all experiments, we use the full unfiltered ActivityNet Captions validation set with ground truth event proposals. Firstly, we assess the selection of the model architecture. To this end, we implemented a version of our method where the transformer was replaced by Bidirectional Recurrent Neural Network with Gated Recurrent Units with attention (Bi-GRU), proposed in BIBREF54. To distil the effect of the change in architecture, the results are shown for visual-only models. Both Bi-GRU and the transformer input I3D features extracted from 64 RGB and optical flow frames (the final model inputs 24 frames). Finally, we set a lower bound for the feature performance by training a transformer model with random video features. Tab. TABREF32 shows the comparison. 
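The tIoU gating described above can be sketched as follows; this is our reading of the protocol rather than the official evaluation script BIBREF53, and the example segments and score are arbitrary.

```python
def temporal_iou(pred, gt):
    """Temporal IoU of two (start, end) segments given in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def gated_score(pred_segment, gt_segment, caption_score, tiou_threshold):
    """A caption score counts only if the proposal overlaps the ground truth
    caption location by more than the tIoU threshold, and is zero otherwise."""
    return caption_score if temporal_iou(pred_segment, gt_segment) > tiou_threshold else 0.0

# averaged over the thresholds used above
scores = [gated_score((10.0, 25.0), (12.0, 24.0), 0.21, t) for t in (0.3, 0.5, 0.7, 0.9)]
average_score = sum(scores) / len(scores)
```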
To conclude, we observe that the feature-transformer-based model is not only lighter but also achieves better performance on the dense video captioning task. Moreover, both methods clearly surpass the random baseline. Secondly, we evaluate the contribution of different modalities in our framework. Tab. TABREF33 contains the results for different modality configurations as well as for two feature fusion approaches: averaging of the output probabilities, and concatenation of the outputs of all modalities followed by two fully connected (FC) layers. We observe that the audio-only model has the worst performance, followed by the visual-only model and the combination of these two. Moreover, concatenation with FC layers results in better performance than averaging. To further assess whether the performance gain is due to the additional modalities or to the extra capacity of the FC layers, we trained a visual-only model with two additional FC layers. The results indicate that such a configuration performs worse than any bi-modal setup. Overall, we conclude that the final model with all three modalities performs best among all tested set-ups, which highlights the importance of the multi-modal setting in dense video captioning. Fig. FIGREF29 shows a qualitative comparison between different models in our ablation study. Moreover, we provide the corresponding captions from the best performing baseline method (Zhou BIBREF4). We noticed the following pattern: the audio-only modality produces coherent sentences and captures the concept of speaking in the video, but there are clear mistakes in the caption content. In contrast, the model with all three modalities manages to capture the man who speaks to the camera, which is also present in the ground truth. Both visual-only MDVC and Zhou struggle to describe the audio details. Finally, to test whether our model improves the performance in general rather than only in specific video categories, we compare the different versions of MDVC per category. To this end, we retrieve the category labels from the YouTube API BIBREF12 (US region) for every available ActivityNet Captions validation video. These labels are given by the user when uploading the video and roughly represent the video content type. The comparison is shown in Fig. FIGREF31. The results imply a consistent gain in performance within each category except for “Film & Animation” and “Travel & Events”, which might be explained by a lack of correspondence between the visual and audio tracks. Specifically, the video might be accompanied by music, e. g. a promotion of a resort. Also, “Film & Animation” contains cartoon-like movies which might have a realistic soundtrack while the visual track is goofy. Conclusion The use of different modalities in computer vision is still an underrepresented topic and, we believe, deserves more attention. In this work, we introduced a multi-modal dense video captioning module (MDVC) and showed the importance of the audio and speech modalities for the dense video captioning task. Specifically, MDVC is based on the transformer architecture, which encodes the feature representation of each modality for a specific event proposal and produces a caption using the information from these modalities. The experiments, conducted on the ActivityNet Captions dataset, show the superior performance of our captioning module compared to the visual-only models in the existing literature. An extensive ablation study verifies this conclusion.
We believe that our results firmly indicate that future works in video captioning should utilize a multi-modal input. Supplementary Material The supplementary material consists of four sections. In Section SECREF35, we provide qualitative results of the MDVC on another example video. The details on features extraction and implementation are described in Section SECREF36 and SECREF38. Finally, the comparison with other methods is shown in Section SECREF39. Supplementary Material ::: Qualitative Results (Another Example) In Figure FIGREF34, we provide qualitative analysis of captioning on another video from ActivityNet Captions validation set to emphasize the importance of additional modalities for dense video captioning, namely, speech and audio. We compare the captioning proposed by MDVC (our model) conditioned on different sets of modalities: audio-only (A-only), visual-only (V-only), and including all modalities (S + A + V). Additionally, we provide the results of a captioning model proposed in Zhou BIBREF4 (visual only) which showed the most promising results according to METEOR. More precisely, the video (YouTube video id: EGrXaq213Oc) lasts two minutes and contains 12 human annotations. The video is an advertisement for snowboarding lessons for children. It shows examples of children successfully riding a snowboard on a hill and supportive adults that help them to learn. A lady narrates the video and appears in the shot a couple of times. Generally, we may observe that MDVC with the audio modality alone (A-only) mostly describes that a woman is speaking which is correct according to the audio content yet the details about snowboarding and children are missing. This is expectedly challenging for the network as no related sound effects to snowboarding are present. In the meantime, the visual-only MDVC grasps the content well, however, misses important details like the gender of the speaker. While the multi-modal model MDVC borrows the advantages of both which results in more accurate captions. The benefits of several modalities stand out in captions for $p_2$ and $p_{10}$ segments. Note that despite the appearance of the lady in the shot during $p_{10}$, the ground truth caption misses it yet our model manages to grasp it. Yet, some limitations of the final model could be noticed as well. In particular, the content of some proposals is dissimilar to the generated captions, e. g. the color of the jacket ($p_4$, $p_5$), or when a lady is holding a snowboard with a child on it while the model predicts that she is holding a ski ($p_7$). Also, the impressive tricks on a snowboard were guessed simply as “ridding down a hill” which is not completely erroneous but still inaccurate ($p_8$). Overall, the model makes reasonable mistakes except for proposals $p_3$ and $p_4$. Finally, the generated captions provide more general description of a scene compared to the ground truth that is detailed and specific which could be a subject for future investigation. Supplementary Material ::: Details on Feature Extraction Before training, we pre-calculate the features for both audio and visual modalities. In particular, the audio features were extracted using VGGish BIBREF47 which was trained on AudioSet BIBREF55. The input to the VGGish model is a $96\times 64$ log mel-scaled spectrogram extracted for non-overlapping $0.96$ seconds segments. 
The log mel-scaled spectrogram is obtained by applying Short-Time Fourier Transform on a 16 kHz mono audio track using a periodic Hann window with 25 ms length with 10 ms overlap. The output is a 128-d feature vector after an activation function and extracted before a classification layer. Therefore, the input to MDVC is a matrix with dimension $T_j^a \times 128$ where $T_j^a$ is the number of features proposal $p_j$ consists of. The visual features were extracted using I3D BIBREF46 network which inputs a set of 24 RGB and optical flow frames extracted at 25 fps. The optical flow is extracted with PWC-Net BIBREF58. First, each frame is resized such that the shortest side is 256 pixels. Then, the center region is cropped to obtain $224\times 224$ frames. Both RGB and flow stacks are passed through the corresponding branch of I3D. The output of each branch are summed together producing 1024-d features for each stack of 24 frames. Hence, the resulting matrix has the shape: $T_j^v\times 1024$, where $T_j^v$ is the number of features required for a proposal $p_j$. We use 24 frames for I3D input to temporally match with the input of the audio modality as $\frac{24}{25} = 0.96$. Also note that I3D was pre-trained on the Kinetics dataset with inputs of 64 frames, while we use 24 frames. This is a valid approach since we employ the output of the second to the last layer after activation and average it on the temporal axis. The input for speech modality is represented by temporally allocated text segments in the English language (one could think of them as subtitles). For a proposal $ p_j $, we pick all segments that both: a) end after the proposal starting point, and b) start before the proposal ending point. This provides us with sufficient coverage of what has been said during the proposal segment. Similarly to captions, each word in a speech segment is represented as a number which corresponds to the word's order number in the vocabulary and then passed through the text embedding of size 512. We omit the subtitles that describe the sound like “[Applause]” and “[Music]” as we are only interested in the effect of the speech. Therefore, the speech transformer encoder inputs matrices of shape: $T^s_j\times 512$ where $T^s_j$ is the number of words in corresponding speech for proposal $p_j$. Supplementary Material ::: Implementation Details Since no intermediate layers connecting the features and transformers are used, the dimension of the features transformers $D_T$ corresponds to the size of the extracted features: 512, 128, and 1024 for speech, audio, and visual modalities, respectively. Each feature transformer has one layer ($L$), while the internal layer in the position-wise fully-connected network has $D_P=2048$ units for all modality transformers which was found to perform optimally. We use $H=4$ heads in all multi-headed attention blocks. The captions and speech vocabulary sizes are 10,172 and 23,043, respectively. In all experiments, except for the audio-only model, we use Adam optimizer BIBREF56, a batch containing features for 28 proposals, learning rate $10^{-5}$, $\beta = (0.9, 0.99)$, smoothing parameter $\gamma = 0.7$. In the audio-only model, we apply two-layered transformer architecture with learning rate $10^{-4}$ and $\gamma = 0.2$. To regularize the weights of the model, in every experiment, Dropout BIBREF57 with $p = 0.1$ is applied to the outputs of positional encoding, in every sub-layer before adding a residual, and after the first internal layer of the multi-modal generator. 
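The speech-segment selection rule stated above (keep segments that end after the proposal start and start before the proposal end) amounts to a simple overlap test; the segment data structure below is our assumption about how the ASR output might be stored.

```python
def speech_segments_for_proposal(segments, start, end):
    """Return the ASR segments that overlap the proposal (start, end), in seconds."""
    return [s for s in segments if s["end"] > start and s["start"] < end]

segments = [{"start": 3.0, "end": 6.5, "text": "welcome back"},
            {"start": 20.0, "end": 24.0, "text": "let's begin"}]
print(speech_segments_for_proposal(segments, start=5.0, end=12.0))
# -> only the first segment: it ends after 5.0 and starts before 12.0
```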
During the experimentation, models were trained for at most 200 epochs, and training was stopped early if the average METEOR score, calculated on the ground truth event proposals of both validation sets, had not improved for 50 consecutive epochs. At the end of training, we employ the best model to estimate its performance on the learned temporal proposals. Usually, training of the best models culminated by the 50th epoch; e. g. the final model (MDVC (S + A + V)) was trained for 30 epochs, which took roughly 15 hours on one consumer-type GPU (Nvidia GeForce RTX 2080 Ti). The training code heavily relies on the PyTorch framework and will be released upon publication. Supplementary Material ::: Comparison with Other Methods In Table TABREF37, we present a comparison with another body of methods BIBREF8, BIBREF9 which were not included in the main comparison as they use a Reinforcement Learning (RL) approach to directly optimize the non-differentiable metric (METEOR). We believe that our method could also benefit from this, as the ablation studies in BIBREF8, BIBREF9 show significant gains obtained by applying it. As anticipated, methods which employ reinforcement learning generally perform better in terms of METEOR. Interestingly, our model still outperforms BIBREF8, which uses RL in the captioning module.
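A small sketch of the early-stopping rule described above; this is our reading of the procedure (stop once the validation METEOR has not improved for 50 consecutive epochs), not the released training code.

```python
def should_stop(meteor_history, patience: int = 50) -> bool:
    """meteor_history holds one validation METEOR value per completed epoch."""
    if len(meteor_history) <= patience:
        return False
    best_before_window = max(meteor_history[:-patience])
    return max(meteor_history[-patience:]) <= best_before_window
```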
YouTube videos
7011b26ffc54769897e4859e4932aeddfab82c9f
7011b26ffc54769897e4859e4932aeddfab82c9f_0
Q: What ASR system do they use? Text: Introduction The substantial amount of freely available video material has brought up the need for automatic methods to summarize and compactly represent the essential content. One approach would be to produce a short video skim containing the most important video segments as proposed in the video summarization task BIBREF0. Alternatively, the video content could be described using natural language sentences. Such an approach can lead to a very compact and intuitive representation and is typically referred to as video captioning in the literature BIBREF1. However, producing a single description for an entire video might be impractical for long unconstrained footage. Instead, dense video captioning BIBREF2 aims, first, at temporally localizing events and, then, at producing natural language description for each of them. Fig. FIGREF1 illustrates dense video captions for an example video sequence. Most recent works in dense video captioning formulate the captioning problem as a machine translation task, where the input is a set of features extracted from the video stream and the output is a natural language sentence. Thus, the captioning methods can be leveraged by recent developments in machine translation field, such as Transformer model BIBREF3. The main idea in the transformer is to utilise self-attention mechanism to model long-term dependencies in a sequence. We follow the recent work BIBREF4 and adopt the transformer architecture in our dense video captioning model. The vast majority of previous works are generating captions purely based on visual information BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, almost all videos include an audio track, which could provide vital cues for video understanding. In particular, what is being said by people in the video, might make a crucial difference to the content description. For instance, in a scene when someone knocks the door from an opposite side, we only see the door but the audio helps us to understand that somebody is behind it and wants to enter. Therefore, it is impossible for a model to make a useful caption for it. Also, other types of videos as instruction videos, sport videos, or video lectures could be challenging for a captioning model. In contrast, we build our model to utilize video frames, raw audio signal, and the speech content in the caption generation process. To this end, we deploy automatic speech recognition (ASR) system BIBREF11 to extract time-aligned captions of what is being said (similar to subtitles) and employ it alongside with video and audio representations in the transformer model. The proposed model is assessed using the challenging ActivityNet Captions BIBREF2 benchmark dataset, where we obtain competitive results to the current state-of-the-art. The subsequent ablation studies indicate a substantial contribution from audio and speech signals. Moreover, we retrieve and perform breakdown analysis by utilizing previously unused video category tags provided with the original YouTube videos BIBREF12. The program code of our model and the evaluation approach will be made publicly available. Related Work ::: Video Captioning Early works in video captioning applied rule-based models BIBREF13, BIBREF14, BIBREF15, where the idea was to identify a set of video objects and use them to fill predefined templates to generate a sentence. Later, the need for sentence templates was omitted by casting the captioning problem as a machine translation task BIBREF16. 
Following the success of neural models in translation systems BIBREF17, similar methods became widely popular in video captioning BIBREF18, BIBREF19, BIBREF20, BIBREF1, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. The rationale behind this approach is to train two Recurrent Neural Networks (RNNs) in an encoder-decoder fashion. Specifically, an encoder inputs a set of video features, accumulates its hidden state, which is passed to a decoder for producing a caption. To further improve the performance of the captioning model, several methods have been proposed, including shared memory between visual and textual domains BIBREF26, BIBREF27, spatial and temporal attention BIBREF28, reinforcement learning BIBREF29, semantic tags BIBREF30, BIBREF31, other modalities BIBREF32, BIBREF33, BIBREF34, BIBREF35, and by producing a paragraph instead of one sentence BIBREF36, BIBREF1. Related Work ::: Dense Video Captioning Inspired by the idea of the dense image captioning task BIBREF37, Krishna BIBREF2 introduced a problem of dense video captioning and released a new dataset called ActivityNet Captions which leveraged the research in the field BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF38, BIBREF10. In particular, BIBREF5 adopted the idea of the context-awareness BIBREF2 and generalized the temporal event proposal module to utilize both past and future contexts as well as an attentive fusion to differentiate captions from highly overlapping events. Meanwhile, the concept of Single Shot Detector (SSD) BIBREF39 was also used to generate event proposals and reward maximization for better captioning in BIBREF6. In order to mitigate the intrinsic difficulties of RNNs to model long-term dependencies in a sequence, Zhou BIBREF4 tailored the recent idea of Transformer BIBREF3 for dense video captioning. In BIBREF7 the authors noticed that the captioning may benefit from interactions between objects in a video and developed recurrent higher-order interaction module to model these interactions. Xiong BIBREF8 noticed that many previous models produced redundant captions, and proposed to generate captions in a progressive manner, conditioned on the previous caption while applying paragraph- and sentence-level rewards. Similarly, a “bird-view” correction and two-level reward maximization for a more coherent story-telling have been employed in BIBREF9. Since the human annotation of a video with temporal boundaries and captions for each of them can be laborious, several attempts have been made to address this issue BIBREF40, BIBREF41. Specifically, BIBREF40 employed the idea of cycle-consistency to translate a set of captions to a set of temporal events without any paired annotation, while BIBREF41 automatically-collected dataset of an unparalleled-scale exploiting the structure of instructional videos. The most similar work to our captioning model is BIBREF4 that also utilizes a version of the Transformer BIBREF3 architecture. However, their model is designed solely for visual features. Instead, we believe that dense video captioning may benefit from information from other modalities. Related Work ::: Multi-modal Dense Video Captioning A few attempts has been made to include additional cues like audio and speech BIBREF38, BIBREF42, BIBREF43 for dense video captioning task. Rahman BIBREF38 utilized the idea of cycle-consistency BIBREF40 to build a model with visual and audio inputs. However, due to weak supervision, the system did not reach high performance. 
Hessel BIBREF42 and Shi BIBREF43 employ a transformer architecture BIBREF3 to encode both video frames and speech segments to generate captions for instructional (cooking) videos. Yet, the high results on a dataset which is restricted to instructional video appear to be not evidential as the speech and the captions are already very close to each other in such videos BIBREF41. In contrast to the mentioned multi-modal dense video captioning methods: (1) we present the importance of the speech and audio modalities on a domain-free dataset, (2) propose a multi-modal dense video captioning module (MDVC) which can be scaled to any number of modalities. Proposed Framework In this section, we briefly outline the workflow of our method referred to as Multi-modal Dense Video Captioning (MDVC) which is shown in Fig. FIGREF5. The goal of our method is to temporally localize events on a video and to produce a textual description for each of them. To this end, we apply a two-stage approach. Firstly, we obtain the temporal event locations. For this task, we employ the Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5. Bi-SST applies 3D Convolution network (C3D) BIBREF44 to video frames and extracts features that are passed to subsequent bi-directional LSTM BIBREF45 network. The LSTM accumulates visual cues over time and predicts confidence scores for each location to be start/end point of an event. Finally, a set of event proposals (start/end times) is obtained and passed to the second stage for caption generation. Secondly, we generate the captions given a proposal. To produce inputs from audio, visual, and speech modalities, we use Inflated 3D convolutions (I3D) BIBREF46 for visual and VGGish network BIBREF47 for audio modalities. For speech representation as a text, we employ an external ASR system BIBREF11. To represent the text into a numerical form, we use a similar text embedding which is used for caption encoding. The features are, then, fed to individual transformer models along with the words of a caption from the previous time steps. The output of the transformer is passed into a generator which fuses the outputs from all modalities and estimates a probability distribution over the word vocabulary. After sampling the next word, the process is repeated until a special end token is obtained. Fig. FIGREF1 illustrates an example modality and the corresponding event captions. Proposed Framework ::: Temporal Event Localization Module An event localization module is dedicated to generating a set of temporal regions which might contain an event. To achieve this, we employ pre-trained Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5 as it has is been shown to reach good performance in the proposal generation task. Bi-SST inputs a sequence of $T$ RGB frames from a video $V = (x_1, x_2, \dots , x_F)$ and extracts a set of 4096-d features $V^{\prime } = (f_1, f_2, \dots , f_T)$ by applying a 3D Convolution network (C3D) on non-overlapping segments of size 16 with a stride of 64 frames. To reduce the feature dimension, only 500 principal components were selected using PCA. To account for the video context, events are proposed during forward and backward passes on a video sequence $V^{\prime }$, and, then, the resulting scores are fused together to obtain the final proposal set. 
Specifically, during the forward pass, LSTM is used to accumulate the visual clues from the “past” context at each position $t$ which is treated as an ending point and produce confidence scores for each proposal. Afterwards, a similar procedure is performed during the backward pass where the features $V^{\prime }$ are used in a reversed order. This empowers the model to have a sense of the “future” context in a video. In contrast to the forward pass, each position is treated as a starting point of the proposal. Finally, the confidence scores from both passes are fused by multiplication of corresponding scores for each proposal at each time step, and, then, filtered according to a predefined threshold. Finally, we obtain a set of $N_V$ event proposals for caption generation $P_V=\lbrace p_j = (\text{start}_j, \text{end}_j, \text{score}_j)\rbrace _{j=1}^{N_V}$. Proposed Framework ::: Captioning Module In this section we explain the captioning based for an example modality, namely, visual. Given a video $V$ and a set of proposals $P_V$ from the event localization module, the task of the captioning module is to provide a caption for each proposal in $P_V$. In order to extract features from a video $V$, we employ I3D network BIBREF46 pre-trained on the Kinetics dataset which produces 1024-d features. The gap between the extracted features and the generated captions is filled with Transformer BIBREF3 architecture which was proven to effectively encode and decode the information in a sequence-to-sequence setting. Proposed Framework ::: Captioning Module ::: Feature Transformer As shown in Fig. FIGREF6, Feature Transformer architecture mainly consists of three blocks: an encoder, decoder, and generator. The encoder inputs a set of extracted features $ \mathbf {v}^j = (v_1, v_2, \dots , v_{T_j}) $ temporally corresponding to a proposal $p_j$ from $P_V$ and maps it to a sequence of internal representations $ \mathbf {z}^j = (z_1, z_2, \dots , z_{T_j}) $. The decoder is conditioned on the output of the encoder $\mathbf {z}^j$ and the embedding $ \mathbf {e}^j_{\leqslant t} = (e_1, e_2, \dots , e_t)$ of the words in a caption $ \mathbf {w}^j_{\leqslant t} = (w_1, w_2, \dots , w_t) $. It produces the representation $ \mathbf {g}^j_{\leqslant t} = (g_1, g_2, \dots , g_t) $ which, in turn, is used by the generator to model a distribution over a vocabulary for the next word $ p(w_{t+1}|\mathbf {g}^j_{\leqslant t}) $. The next word is selected greedily by obtaining the word with the highest probability until a special ending token is sampled. The captioning is initialized with a starting token. Both are added to the vocabulary. Before providing an overview of the encoder, decoder, and generator, we presenting the notion of multi-headed attention that acts as an essential part of the decoder and encoder blocks. The concept of the multi-head attention, in turn, heavily relies on dot-product attention which we describe next. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Dot-product Attention The idea of the multi-headed attention rests on the scaled dot-product attention which calculates the weighted sum of values. The weights are obtained by applying the softmax function on the dot-product of each pair of rows of queries and keys scaled by $\frac{1}{\sqrt{D_k}}$. The scaling is done to prevent the softmax function from being in the small gradient regions BIBREF3. 
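Before the attention blocks are formalized below, the greedy decoding loop described in this overview can be sketched as follows; `decode_step` is a hypothetical stand-in for the full transformer and generator, and the token ids are toy values.

```python
import torch

def greedy_decode(decode_step, start_id: int, end_id: int, max_len: int = 30):
    """Append the most probable next word until the special end token appears."""
    prefix = [start_id]
    for _ in range(max_len):
        probs = decode_step(torch.tensor(prefix))   # distribution over the vocabulary
        next_id = int(probs.argmax())
        prefix.append(next_id)
        if next_id == end_id:
            break
    return prefix

# toy stand-in: always prefers token 3, then emits the end token
toy = lambda prefix: torch.nn.functional.one_hot(
    torch.tensor(3 if len(prefix) < 3 else 1), num_classes=10).float()
print(greedy_decode(toy, start_id=0, end_id=1))     # -> [0, 3, 3, 1]
```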
Formally the scaled dot-product attention can be represented as follows where $Q, K, V $ are queries, keys, and values, respectively. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Multi-headed Attention The multi-headed attention block is used once in each encoder layer and twice in each decoder layer. The block consists of $H$ heads that allows to cooperatively account for information from several representations sub-spaces at every position while preserving the same computation complexity BIBREF3. In a transformer with dimension $D_T$, each head is defined in the following way where $q, k, v$ are matrices which have $D_T$ columns and the number of rows depending on the position of the multi-headed block, yet with the same number of rows for $k$ and $v$ to make the calculation in (DISPLAY_FORM11) to be feasible. The $W^{q}_h, W^{k}_h, W^{v}_h \in \mathbb {R}^{D_T \times D_k}$ are trainable projection matrices that map $q, k , v$ from $D_T$ into $D_k= \frac{D_T}{H}$, asserting $D_T$ is a multiple of $H$. The multi-head attention, in turn, is the concatenation of all attention heads mapped back into $D_T$ by trainable parameter matrix $W^o \in \mathbb {R}^{D_k \cdot H \times D_T}$: Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Encoder The encoder consists of $ L $ layers. The first layer inputs a set of features $ \mathbf {v}^j $ and outputs an internal representation $ \mathbf {z}_1^j \in \mathbb {R}^{T_j \times D_T} $ while each of the next layers treats the output of a previous layer as its input. Each encoder layer $l$ consist of two sub-layers: multi-headed attention and position-wise fully connected network which are explained later in this section. The input to both sub-layers are normalized using layer normalization BIBREF48, each sub-layer is surrounded by a residual connection BIBREF49 (see Fig. FIGREF6). Formally, the $l$-th encoder layer has the following definition where $\text{FCN}$ is the position-wise fully connected network. Note, the multi-headed attention has identical queries, keys, and values ($ \overline{\mathbf {z}}_l^j $). Such multi-headed attention block is also referred to as self-multi-headed attention. It enables an encoder layer $l$ to account for the information from all states from the previous layer $ \mathbf {z}_{l-1}^j$. This property contrasts with the idea of RNN which accumulates only the information from the past positions. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Decoder Similarly to the encoder, the decoder has $ L $ layers. At a position $t$, the decoder inputs a set of embedded words $\mathbf {e}^j_{\leqslant t}$ with the output of the encoder $\mathbf {z}^j$ and sends the output to the next layer which is conditioned on this output and, again, the encoder output $\mathbf {z}^j$. Eventually, the decoder producing its internal representation $\mathbf {g}_{\leqslant t}^j \in \mathbb {R}^{t \times D_T}$. The decoder block is similar to the encoder but has an additional sub-layer that applies multi-headed attention on the encoder output and the output of its previous sub-layer. The decoder employs the layer normalization and residual connections at all three sub-layers in the same fashion as the encoder. Specifically, the $l$-th decoder layer has the following form: where $ \mathbf {z}^j $ is the encoder output. 
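A compact PyTorch sketch of the scaled dot-product and multi-headed attention defined above (ours, for illustration); head splitting is done with reshapes over combined projection matrices, which is the usual equivalent of the per-head $W^{q}_h, W^{k}_h, W^{v}_h$ formulation.

```python
import math
import torch
import torch.nn as nn

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(D_k)) V, computed row-wise."""
    d_k = q.size(-1)
    weights = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(d_k), dim=-1)
    return weights @ v

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0, "D_T must be a multiple of H"
        self.h, self.d_k = n_heads, d_model // n_heads
        self.wq, self.wk, self.wv = (nn.Linear(d_model, d_model) for _ in range(3))
        self.wo = nn.Linear(d_model, d_model)           # W^o

    def forward(self, q, k, v):
        def split(x):                                   # (batch, seq, D_T) -> (batch, H, seq, D_k)
            b, s, _ = x.shape
            return x.view(b, s, self.h, self.d_k).transpose(1, 2)
        heads = scaled_dot_product_attention(split(self.wq(q)), split(self.wk(k)), split(self.wv(v)))
        b, _, s, _ = heads.shape
        return self.wo(heads.transpose(1, 2).reshape(b, s, self.h * self.d_k))

# e.g. the visual stream: D_T = 1024, H = 4 heads, so D_k = 256
mha = MultiHeadAttention(d_model=1024, n_heads=4)
z = mha(torch.randn(2, 7, 1024), torch.randn(2, 7, 1024), torch.randn(2, 7, 1024))
```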
Note, similarly to the encoder, (DISPLAY_FORM18) is a self-multi-headed attention function while the second multi-headed attention block attends on both the encoder and decoder and is also referred to as encoder-decoder attention. This block enables each layer of the decoder to attend all state of the encoder's output $ \mathbf {z}^j$. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Position-wise Fully-Connected Network The fully connected network is used in each layer of the encoder and the decoder. It is a simple two-layer neural network that inputs $x$ with the output of the multi-head attention block, and, then, projects each row (or position) of the input $x$ from $D_T$ space onto $D_P$, $(D_P > D_T)$ and back, formally: where $W_1 \in \mathbb {R}^{D_T \times D_P}$, $W_2 \in \mathbb {R}^{D_P \times D_T}$, and biases $b_1, b_2$ are trainable parameters, $\text{ReLU}$ is a rectified linear unit. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Generator At the position $t$, the generator consumes the output of the decoder $\mathbf {g}^j_{\leqslant t}$ and produces a distribution over the vocabulary of words $p(w_{t+1}| \mathbf {g}^j_{\leqslant t})$. To obtain the distribution, the generator applies the softmax function of the output of a fully connected layer with a weight matrix $W_G \in \mathbb {R}^{D_T \times D_V}$ where $D_V$ is a vocabulary size. The word with the highest probability is selected as the next one. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Input Embedding and Positional Encoding Since the representation of textual data is usually sparse due to a large vocabulary, the dimension of the input of a neural language model is reduced with an embedding into a dimension of a different size, namely $D_T$. Also, following BIBREF3, we multiply the embedding weights by $\sqrt{D_T}$. The position encoding is required to allow the transformer to have a sense of the order in an input sequence. We adopt the approach proposed for a transformer architecture, i. e. we add the output of the combination of sine and cosine functions to the embedded input sequence BIBREF3. Proposed Framework ::: Captioning Module ::: Multi-modal Dense Video Captioning In this section, we present the multi-modal dense video captioning module which, utilises visual, audio, and speech modalities. See Fig. FIGREF6 for a schematic representation of the module. For the sake of speech representation $\mathbf {s}^j = (s_1, s_2, \dots , s_{T_j^s})$, we use the text embedding of size 512-d that is similar to the one which is employed in the embedding of a caption $\mathbf {w}^j_{\leqslant t}$. To account for the audio information, given a proposal $p_j$ we extract a set of features $\mathbf {a}_j = (a_1, a_2, \dots , a_{T_j^a})$ applying the 128-d embedding layer of the pre-trained VGGish network BIBREF47 on an audio track. While the visual features $\mathbf {v}^j = (v_1, v_2, \dots v_{T_j^v}) $ are encoded with 1024-d vectors by Inflated 3D (I3D) convolutional network BIBREF46. To fuse the features, we create an encoder and a decoder for each modality with dimensions corresponding to the size of the extracted features. The outputs from all decoders are fused inside of the generator, and the distribution of a next word $w_{t+1}$ is formed. 
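Before the fusion weights are detailed below, we note that the sinusoidal positional encoding adopted above can be sketched as follows (our code, following the standard transformer formulation BIBREF3); the caption length 13 and $D_T = 512$ are only example values.

```python
import math
import torch

def positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Standard sine/cosine position encoding added to the embedded inputs."""
    position = torch.arange(seq_len).unsqueeze(1).float()
    div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

# an embedded caption of 13 words, scaled by sqrt(D_T) and shifted by the encoding
d_t = 512
embedded = torch.randn(13, d_t) * math.sqrt(d_t)
encoded = embedded + positional_encoding(13, d_t)
```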
In our experimentation, we found that a simple two-layer fully-connected network applied to a matrix of concatenated features performs best, with a ReLU activation after the first layer and a softmax after the second one. Each layer of the network has a matrix of trainable weights: $W_{F_1} \in \mathbb {R}^{D_F \times D_V}$ and $W_{F_2} \in \mathbb {R}^{D_V \times D_V}$, where $D_F = 512 + 128 + 1024 $ and $D_V$ is the vocabulary size. Proposed Framework ::: Model Training As the training is conducted using mini-batches of size 28, the features within one modality must be of the same length so that they can be stacked into a tensor. In this regard, we pad the features and the embedded captions to match the size of the longest sample. The model is trained by optimizing the Kullback–Leibler divergence loss, which measures the “distance” between the ground truth and predicted distributions; the loss values are averaged over all words in a batch while ignoring the masked tokens. Since many words in the English language have several synonyms and human annotation may contain mistakes, we encourage the model to be less certain about its predictions by applying Label Smoothing BIBREF50 with smoothing parameter $\gamma $ to the ground truth labels. In particular, in the ground truth distribution over the vocabulary of size $D_V$, which is usually represented as a one-hot encoding vector, the value at the correct word is replaced with probability $1-\gamma $ while the remaining values are filled with $\frac{\gamma }{D_V-1}$. During training, we exploit the teacher forcing technique, which uses the ground truth sequence up to position $t$ as the input to predict the next word, instead of using the sequence of the model's own predictions. As we input the whole ground truth sequence at once and predict the next word at each position, we need to prevent the transformer from peeking at information from future positions, since it attends to all positions of the input. To this end, we apply masking inside the self-multi-headed attention block in the decoder for each position higher than $t-1$, following BIBREF3. The details on feature extraction and other implementation details are available in the supplementary materials. Experiments ::: Dataset We perform our experiments using the ActivityNet Captions dataset BIBREF2, which is considered the standard benchmark for the dense video captioning task. The dataset contains approximately 20k YouTube videos and is split into 50/25/25 % parts for training, validation, and testing, respectively. Each video is, on average, two minutes long and contains 3.65 temporally localized captions of around 13.65 words each. In addition, each video in the validation set is annotated twice by different annotators. We report all results using the validation set (no ground truth is provided for the test set). The dataset itself is distributed as a collection of links to YouTube videos, some of which are no longer available. The authors provide pre-computed C3D features and frames at 5 fps, but these are not suitable for our experiments. At the time of writing, we found 9,167 (out of 10,009) training and 4,483 (out of 4,917) validation videos, which is roughly 91 % of the dataset. Of these, 2,798 training and 1,374 validation videos (approx. 28 %) contain at least one speech segment. The speech content was obtained from the closed captions (CC) provided by the YouTube ASR system and can be thought of as subtitles.
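As a side note on the training procedure above, the masking of positions higher than $t-1$ can be sketched as follows (our illustration of the standard look-ahead mask BIBREF3):

```python
import torch

def subsequent_mask(size: int) -> torch.Tensor:
    """Boolean mask allowing position t to attend only to positions <= t."""
    return torch.tril(torch.ones(size, size, dtype=torch.bool))

def masked_attention_weights(scores: torch.Tensor) -> torch.Tensor:
    """Apply the mask to raw attention scores before the softmax."""
    mask = subsequent_mask(scores.size(-1))
    return torch.softmax(scores.masked_fill(~mask, float("-inf")), dim=-1)

print(subsequent_mask(4))
# tensor([[ True, False, False, False],
#         [ True,  True, False, False],
#         [ True,  True,  True, False],
#         [ True,  True,  True,  True]])
```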
Experiments ::: Metrics We are evaluating the performance of our model using BLEU@N BIBREF51 and METEOR BIBREF52. We regard the METEOR as our primary metric as it has been shown to be highly correlated with human judgement in a situation with a limited number of references (only one, in our case). We employ the official evaluation script provided in BIBREF53. Thus, the metrics are calculated if a proposed event and a ground truth location of a caption overlaps more than a specified temporal Intersection over Union (tIoU) and zero otherwise. All metric values are averaged for every video, and, then, for every threshold tIoU in $[0.3, 0.5, 0.7, 0.9]$. On the validation, we average the resulting scores for both validation sets. For the learned proposal setting, we report our results on at most 100 proposals per video. Notably, up to early 2017, the evaluation code had an issue which previously overestimated the performance of the algorithms in the learned proposal setting BIBREF9. Therefore, we report the results using the new evaluation code. Experiments ::: Comparison with Baseline Methods We compare our method with five related approaches, namely Krishna BIBREF2, Wang BIBREF5, Zhou BIBREF4, Li BIBREF6, and Rahman BIBREF38. We take the performance values from the original papers, except for BIBREF6, and BIBREF4, which are taken from BIBREF9 due to the evaluation issue (see Sec. SECREF27). The lack of access to the full ActivityNet Captions dataset makes strictly fair comparison difficult as we have less training and validation videos. Nevertheless, we present our results in two set-ups: 1) full validation set with random input features for missing entries, and 2) videos with all three modalities present (video, audio, and speech). The first one is chosen to indicate the lower bound of our performance with the full dataset. Whereas, the second one (referred to as “no missings”) concentrates on the multi-modal setup, which is the main contribution of our work. The obtained results are presented in Tab. TABREF25. Our method (MDVC) achieves comparable or better performance, even though we have access to smaller training set and 9 % of the validation videos are missing (replaced with random input features). Furthermore, if all three modalities are present, our method outperforms all baseline approaches in the case of both GT and learned proposals. Notably, we outperform BIBREF4 which is also based on the transformer architecture and account for the optical flow. This shows the superior performance of our captioning module which, yet, trained on the smaller amount of data. Experiments ::: Ablation Studies In this section, we perform an ablation analysis highlighting the effect of different design choices of our method. For all experiments, we use the full unfiltered ActivityNet Captions validation set with ground truth event proposals. Firstly, we assess the selection of the model architecture. To this end, we implemented a version of our method where the transformer was replaced by Bidirectional Recurrent Neural Network with Gated Recurrent Units with attention (Bi-GRU), proposed in BIBREF54. To distil the effect of the change in architecture, the results are shown for visual-only models. Both Bi-GRU and the transformer input I3D features extracted from 64 RGB and optical flow frames (the final model inputs 24 frames). Finally, we set a lower bound for the feature performance by training a transformer model with random video features. Tab. TABREF32 shows the comparison. 
To conclude, we observe that the feature-transformer-based model is not only lighter but also achieves better performance on the dense video captioning task. Moreover, both methods clearly surpass the random baseline. Secondly, we evaluate the contribution of different modalities in our framework. Tab. TABREF33 contains the results for different modality configurations as well as for two feature fusion approaches: averaging of the output probabilities, and concatenation of the outputs of all modalities followed by two fully connected (FC) layers. We observe that the audio-only model has the worst performance, followed by the visual-only model and the combination of these two. Moreover, concatenation with FC layers results in better performance than averaging. To further assess whether the performance gain is due to the additional modalities or to the extra capacity of the FC layers, we trained a visual-only model with two additional FC layers. The results indicate that such a configuration performs worse than any bi-modal setup. Overall, we conclude that the final model with all three modalities performs best among all tested set-ups, which highlights the importance of the multi-modal setting in dense video captioning. Fig. FIGREF29 shows a qualitative comparison between different models in our ablation study. Moreover, we provide the corresponding captions from the best performing baseline method (Zhou BIBREF4). We noticed the following pattern: the audio-only modality produces coherent sentences and captures the concept of speaking in the video, but there are clear mistakes in the caption content. In contrast, the model with all three modalities manages to capture the man who speaks to the camera, which is also present in the ground truth. Both visual-only MDVC and Zhou struggle to describe the audio details. Finally, to test whether our model improves the performance in general rather than only in specific video categories, we compare the different versions of MDVC per category. To this end, we retrieve the category labels from the YouTube API BIBREF12 (US region) for every available ActivityNet Captions validation video. These labels are given by the user when uploading the video and roughly represent the video content type. The comparison is shown in Fig. FIGREF31. The results imply a consistent gain in performance within each category except for “Film & Animation” and “Travel & Events”, which might be explained by a lack of correspondence between the visual and audio tracks. Specifically, the video might be accompanied by music, e. g. a promotion of a resort. Also, “Film & Animation” contains cartoon-like movies which might have a realistic soundtrack while the visual track is goofy. Conclusion The use of different modalities in computer vision is still an underrepresented topic and, we believe, deserves more attention. In this work, we introduced a multi-modal dense video captioning module (MDVC) and showed the importance of the audio and speech modalities for the dense video captioning task. Specifically, MDVC is based on the transformer architecture, which encodes the feature representation of each modality for a specific event proposal and produces a caption using the information from these modalities. The experiments, conducted on the ActivityNet Captions dataset, show the superior performance of our captioning module compared to the visual-only models in the existing literature. An extensive ablation study verifies this conclusion.
We believe that our results firmly indicate that future works in video captioning should utilize a multi-modal input. Supplementary Material The supplementary material consists of four sections. In Section SECREF35, we provide qualitative results of the MDVC on another example video. The details on features extraction and implementation are described in Section SECREF36 and SECREF38. Finally, the comparison with other methods is shown in Section SECREF39. Supplementary Material ::: Qualitative Results (Another Example) In Figure FIGREF34, we provide qualitative analysis of captioning on another video from ActivityNet Captions validation set to emphasize the importance of additional modalities for dense video captioning, namely, speech and audio. We compare the captioning proposed by MDVC (our model) conditioned on different sets of modalities: audio-only (A-only), visual-only (V-only), and including all modalities (S + A + V). Additionally, we provide the results of a captioning model proposed in Zhou BIBREF4 (visual only) which showed the most promising results according to METEOR. More precisely, the video (YouTube video id: EGrXaq213Oc) lasts two minutes and contains 12 human annotations. The video is an advertisement for snowboarding lessons for children. It shows examples of children successfully riding a snowboard on a hill and supportive adults that help them to learn. A lady narrates the video and appears in the shot a couple of times. Generally, we may observe that MDVC with the audio modality alone (A-only) mostly describes that a woman is speaking which is correct according to the audio content yet the details about snowboarding and children are missing. This is expectedly challenging for the network as no related sound effects to snowboarding are present. In the meantime, the visual-only MDVC grasps the content well, however, misses important details like the gender of the speaker. While the multi-modal model MDVC borrows the advantages of both which results in more accurate captions. The benefits of several modalities stand out in captions for $p_2$ and $p_{10}$ segments. Note that despite the appearance of the lady in the shot during $p_{10}$, the ground truth caption misses it yet our model manages to grasp it. Yet, some limitations of the final model could be noticed as well. In particular, the content of some proposals is dissimilar to the generated captions, e. g. the color of the jacket ($p_4$, $p_5$), or when a lady is holding a snowboard with a child on it while the model predicts that she is holding a ski ($p_7$). Also, the impressive tricks on a snowboard were guessed simply as “ridding down a hill” which is not completely erroneous but still inaccurate ($p_8$). Overall, the model makes reasonable mistakes except for proposals $p_3$ and $p_4$. Finally, the generated captions provide more general description of a scene compared to the ground truth that is detailed and specific which could be a subject for future investigation. Supplementary Material ::: Details on Feature Extraction Before training, we pre-calculate the features for both audio and visual modalities. In particular, the audio features were extracted using VGGish BIBREF47 which was trained on AudioSet BIBREF55. The input to the VGGish model is a $96\times 64$ log mel-scaled spectrogram extracted for non-overlapping $0.96$ seconds segments. 
The log mel-scaled spectrogram is obtained by applying Short-Time Fourier Transform on a 16 kHz mono audio track using a periodic Hann window with 25 ms length with 10 ms overlap. The output is a 128-d feature vector after an activation function and extracted before a classification layer. Therefore, the input to MDVC is a matrix with dimension $T_j^a \times 128$ where $T_j^a$ is the number of features proposal $p_j$ consists of. The visual features were extracted using I3D BIBREF46 network which inputs a set of 24 RGB and optical flow frames extracted at 25 fps. The optical flow is extracted with PWC-Net BIBREF58. First, each frame is resized such that the shortest side is 256 pixels. Then, the center region is cropped to obtain $224\times 224$ frames. Both RGB and flow stacks are passed through the corresponding branch of I3D. The output of each branch are summed together producing 1024-d features for each stack of 24 frames. Hence, the resulting matrix has the shape: $T_j^v\times 1024$, where $T_j^v$ is the number of features required for a proposal $p_j$. We use 24 frames for I3D input to temporally match with the input of the audio modality as $\frac{24}{25} = 0.96$. Also note that I3D was pre-trained on the Kinetics dataset with inputs of 64 frames, while we use 24 frames. This is a valid approach since we employ the output of the second to the last layer after activation and average it on the temporal axis. The input for speech modality is represented by temporally allocated text segments in the English language (one could think of them as subtitles). For a proposal $ p_j $, we pick all segments that both: a) end after the proposal starting point, and b) start before the proposal ending point. This provides us with sufficient coverage of what has been said during the proposal segment. Similarly to captions, each word in a speech segment is represented as a number which corresponds to the word's order number in the vocabulary and then passed through the text embedding of size 512. We omit the subtitles that describe the sound like “[Applause]” and “[Music]” as we are only interested in the effect of the speech. Therefore, the speech transformer encoder inputs matrices of shape: $T^s_j\times 512$ where $T^s_j$ is the number of words in corresponding speech for proposal $p_j$. Supplementary Material ::: Implementation Details Since no intermediate layers connecting the features and transformers are used, the dimension of the features transformers $D_T$ corresponds to the size of the extracted features: 512, 128, and 1024 for speech, audio, and visual modalities, respectively. Each feature transformer has one layer ($L$), while the internal layer in the position-wise fully-connected network has $D_P=2048$ units for all modality transformers which was found to perform optimally. We use $H=4$ heads in all multi-headed attention blocks. The captions and speech vocabulary sizes are 10,172 and 23,043, respectively. In all experiments, except for the audio-only model, we use Adam optimizer BIBREF56, a batch containing features for 28 proposals, learning rate $10^{-5}$, $\beta = (0.9, 0.99)$, smoothing parameter $\gamma = 0.7$. In the audio-only model, we apply two-layered transformer architecture with learning rate $10^{-4}$ and $\gamma = 0.2$. To regularize the weights of the model, in every experiment, Dropout BIBREF57 with $p = 0.1$ is applied to the outputs of positional encoding, in every sub-layer before adding a residual, and after the first internal layer of the multi-modal generator. 
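The temporal alignment argument above (24 frames at 25 fps matching the 0.96 s VGGish window) can be verified with a short sketch; the helper is ours and purely illustrative.

```python
def feature_lengths(duration_sec: float, fps: float = 25.0):
    """Number of audio (VGGish) and visual (I3D) feature vectors for a segment.

    Both modalities cover non-overlapping 0.96 s windows: VGGish consumes
    0.96 s of audio and I3D consumes 24 frames at 25 fps (24 / 25 = 0.96 s),
    so the two feature sequences stay temporally aligned.
    """
    window = 0.96
    t_audio = int(duration_sec // window)        # rows of the T^a_j x 128 matrix
    t_visual = int(duration_sec * fps // 24)     # rows of the T^v_j x 1024 matrix
    return t_audio, t_visual

print(feature_lengths(120.0))   # a two-minute video -> (125, 125)
```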
During the experimentation, models were trained for at most 200 epochs, and training was stopped early if the average METEOR score, calculated on the ground truth event proposals of both validation sets, had not improved for 50 consecutive epochs. At the end of training, we employ the best model to estimate its performance on the learned temporal proposals. Usually, training of the best models culminated by the 50th epoch; e. g. the final model (MDVC (S + A + V)) was trained for 30 epochs, which took roughly 15 hours on one consumer-type GPU (Nvidia GeForce RTX 2080 Ti). The training code heavily relies on the PyTorch framework and will be released upon publication. Supplementary Material ::: Comparison with Other Methods In Table TABREF37, we present a comparison with another body of methods BIBREF8, BIBREF9 which were not included in the main comparison as they use a Reinforcement Learning (RL) approach to directly optimize the non-differentiable metric (METEOR). We believe that our method could also benefit from this, as the ablation studies in BIBREF8, BIBREF9 show significant gains obtained by applying it. As anticipated, methods which employ reinforcement learning generally perform better in terms of METEOR. Interestingly, our model still outperforms BIBREF8, which uses RL in the captioning module.
YouTube ASR system
3a6559dc6eba7f5abddf3ac27376ba0b9643a908
3a6559dc6eba7f5abddf3ac27376ba0b9643a908_0
Q: What is the state of the art? Text: Introduction The substantial amount of freely available video material has brought up the need for automatic methods to summarize and compactly represent the essential content. One approach would be to produce a short video skim containing the most important video segments as proposed in the video summarization task BIBREF0. Alternatively, the video content could be described using natural language sentences. Such an approach can lead to a very compact and intuitive representation and is typically referred to as video captioning in the literature BIBREF1. However, producing a single description for an entire video might be impractical for long unconstrained footage. Instead, dense video captioning BIBREF2 aims, first, at temporally localizing events and, then, at producing natural language description for each of them. Fig. FIGREF1 illustrates dense video captions for an example video sequence. Most recent works in dense video captioning formulate the captioning problem as a machine translation task, where the input is a set of features extracted from the video stream and the output is a natural language sentence. Thus, the captioning methods can be leveraged by recent developments in machine translation field, such as Transformer model BIBREF3. The main idea in the transformer is to utilise self-attention mechanism to model long-term dependencies in a sequence. We follow the recent work BIBREF4 and adopt the transformer architecture in our dense video captioning model. The vast majority of previous works are generating captions purely based on visual information BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, almost all videos include an audio track, which could provide vital cues for video understanding. In particular, what is being said by people in the video, might make a crucial difference to the content description. For instance, in a scene when someone knocks the door from an opposite side, we only see the door but the audio helps us to understand that somebody is behind it and wants to enter. Therefore, it is impossible for a model to make a useful caption for it. Also, other types of videos as instruction videos, sport videos, or video lectures could be challenging for a captioning model. In contrast, we build our model to utilize video frames, raw audio signal, and the speech content in the caption generation process. To this end, we deploy automatic speech recognition (ASR) system BIBREF11 to extract time-aligned captions of what is being said (similar to subtitles) and employ it alongside with video and audio representations in the transformer model. The proposed model is assessed using the challenging ActivityNet Captions BIBREF2 benchmark dataset, where we obtain competitive results to the current state-of-the-art. The subsequent ablation studies indicate a substantial contribution from audio and speech signals. Moreover, we retrieve and perform breakdown analysis by utilizing previously unused video category tags provided with the original YouTube videos BIBREF12. The program code of our model and the evaluation approach will be made publicly available. Related Work ::: Video Captioning Early works in video captioning applied rule-based models BIBREF13, BIBREF14, BIBREF15, where the idea was to identify a set of video objects and use them to fill predefined templates to generate a sentence. 
Later, the need for sentence templates was omitted by casting the captioning problem as a machine translation task BIBREF16. Following the success of neural models in translation systems BIBREF17, similar methods became widely popular in video captioning BIBREF18, BIBREF19, BIBREF20, BIBREF1, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. The rationale behind this approach is to train two Recurrent Neural Networks (RNNs) in an encoder-decoder fashion. Specifically, an encoder inputs a set of video features, accumulates its hidden state, which is passed to a decoder for producing a caption. To further improve the performance of the captioning model, several methods have been proposed, including shared memory between visual and textual domains BIBREF26, BIBREF27, spatial and temporal attention BIBREF28, reinforcement learning BIBREF29, semantic tags BIBREF30, BIBREF31, other modalities BIBREF32, BIBREF33, BIBREF34, BIBREF35, and by producing a paragraph instead of one sentence BIBREF36, BIBREF1. Related Work ::: Dense Video Captioning Inspired by the idea of the dense image captioning task BIBREF37, Krishna BIBREF2 introduced a problem of dense video captioning and released a new dataset called ActivityNet Captions which leveraged the research in the field BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF38, BIBREF10. In particular, BIBREF5 adopted the idea of the context-awareness BIBREF2 and generalized the temporal event proposal module to utilize both past and future contexts as well as an attentive fusion to differentiate captions from highly overlapping events. Meanwhile, the concept of Single Shot Detector (SSD) BIBREF39 was also used to generate event proposals and reward maximization for better captioning in BIBREF6. In order to mitigate the intrinsic difficulties of RNNs to model long-term dependencies in a sequence, Zhou BIBREF4 tailored the recent idea of Transformer BIBREF3 for dense video captioning. In BIBREF7 the authors noticed that the captioning may benefit from interactions between objects in a video and developed recurrent higher-order interaction module to model these interactions. Xiong BIBREF8 noticed that many previous models produced redundant captions, and proposed to generate captions in a progressive manner, conditioned on the previous caption while applying paragraph- and sentence-level rewards. Similarly, a “bird-view” correction and two-level reward maximization for a more coherent story-telling have been employed in BIBREF9. Since the human annotation of a video with temporal boundaries and captions for each of them can be laborious, several attempts have been made to address this issue BIBREF40, BIBREF41. Specifically, BIBREF40 employed the idea of cycle-consistency to translate a set of captions to a set of temporal events without any paired annotation, while BIBREF41 automatically-collected dataset of an unparalleled-scale exploiting the structure of instructional videos. The most similar work to our captioning model is BIBREF4 that also utilizes a version of the Transformer BIBREF3 architecture. However, their model is designed solely for visual features. Instead, we believe that dense video captioning may benefit from information from other modalities. Related Work ::: Multi-modal Dense Video Captioning A few attempts has been made to include additional cues like audio and speech BIBREF38, BIBREF42, BIBREF43 for dense video captioning task. 
Rahman BIBREF38 utilized the idea of cycle-consistency BIBREF40 to build a model with visual and audio inputs. However, due to weak supervision, the system did not reach high performance. Hessel BIBREF42 and Shi BIBREF43 employ a transformer architecture BIBREF3 to encode both video frames and speech segments to generate captions for instructional (cooking) videos. Yet, the high results on a dataset restricted to instructional videos are not conclusive, as the speech and the captions are already very close to each other in such videos BIBREF41. In contrast to the mentioned multi-modal dense video captioning methods: (1) we demonstrate the importance of the speech and audio modalities on a domain-free dataset, and (2) we propose a multi-modal dense video captioning module (MDVC) which can be scaled to any number of modalities. Proposed Framework In this section, we briefly outline the workflow of our method, referred to as Multi-modal Dense Video Captioning (MDVC), which is shown in Fig. FIGREF5. The goal of our method is to temporally localize events in a video and to produce a textual description for each of them. To this end, we apply a two-stage approach. Firstly, we obtain the temporal event locations. For this task, we employ the Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5. Bi-SST applies a 3D Convolution network (C3D) BIBREF44 to video frames and extracts features that are passed to a subsequent bi-directional LSTM BIBREF45 network. The LSTM accumulates visual cues over time and predicts confidence scores for each location being the start/end point of an event. Finally, a set of event proposals (start/end times) is obtained and passed to the second stage for caption generation. Secondly, we generate the captions given a proposal. To produce inputs from the audio, visual, and speech modalities, we use Inflated 3D convolutions (I3D) BIBREF46 for the visual and the VGGish network BIBREF47 for the audio modality. For the speech representation as text, we employ an external ASR system BIBREF11. To represent the text in numerical form, we use a text embedding similar to the one used for caption encoding. The features are then fed to individual transformer models along with the words of a caption from the previous time steps. The output of the transformer is passed into a generator which fuses the outputs from all modalities and estimates a probability distribution over the word vocabulary. After sampling the next word, the process is repeated until a special end token is obtained. Fig. FIGREF1 illustrates an example modality and the corresponding event captions. Proposed Framework ::: Temporal Event Localization Module The event localization module is dedicated to generating a set of temporal regions which might contain an event. To achieve this, we employ the pre-trained Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5, as it has been shown to reach good performance in the proposal generation task. Bi-SST inputs a sequence of $F$ RGB frames from a video $V = (x_1, x_2, \dots , x_F)$ and extracts a set of 4096-d features $V^{\prime } = (f_1, f_2, \dots , f_T)$ by applying a 3D Convolution network (C3D) on non-overlapping segments of size 16 with a stride of 64 frames. To reduce the feature dimension, only 500 principal components were selected using PCA.
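As an illustration of the feature preprocessing just described (4096-d C3D segment features reduced to 500 principal components), a minimal sketch using scikit-learn is given below; this is not the authors' code, and the array shapes are placeholders standing in for real C3D outputs.

import numpy as np
from sklearn.decomposition import PCA

def reduce_c3d_features(features, n_components=500):
    # features: (num_segments, 4096) C3D activations.
    # In practice the PCA basis would be fit once on the training corpus
    # and then reused for every video; here we fit and transform in one call.
    pca = PCA(n_components=n_components)
    return pca.fit_transform(features)

# toy example with random values in place of real C3D features
dummy = np.random.randn(1000, 4096).astype(np.float32)
print(reduce_c3d_features(dummy).shape)  # (1000, 500)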
To account for the video context, events are proposed during forward and backward passes on a video sequence $V^{\prime }$, and, then, the resulting scores are fused together to obtain the final proposal set. Specifically, during the forward pass, an LSTM is used to accumulate the visual cues from the “past” context at each position $t$, which is treated as an ending point, and to produce confidence scores for each proposal. Afterwards, a similar procedure is performed during the backward pass, where the features $V^{\prime }$ are used in reversed order. This empowers the model to have a sense of the “future” context in a video. In contrast to the forward pass, each position is treated as a starting point of the proposal. Finally, the confidence scores from both passes are fused by multiplying the corresponding scores for each proposal at each time step, and then filtered according to a predefined threshold. As a result, we obtain a set of $N_V$ event proposals for caption generation $P_V=\lbrace p_j = (\text{start}_j, \text{end}_j, \text{score}_j)\rbrace _{j=1}^{N_V}$. Proposed Framework ::: Captioning Module In this section we explain the captioning module for an example modality, namely the visual one. Given a video $V$ and a set of proposals $P_V$ from the event localization module, the task of the captioning module is to provide a caption for each proposal in $P_V$. In order to extract features from a video $V$, we employ the I3D network BIBREF46 pre-trained on the Kinetics dataset, which produces 1024-d features. The gap between the extracted features and the generated captions is bridged with the Transformer BIBREF3 architecture, which has been proven to effectively encode and decode information in a sequence-to-sequence setting. Proposed Framework ::: Captioning Module ::: Feature Transformer As shown in Fig. FIGREF6, the Feature Transformer architecture mainly consists of three blocks: an encoder, a decoder, and a generator. The encoder inputs a set of extracted features $ \mathbf {v}^j = (v_1, v_2, \dots , v_{T_j}) $ temporally corresponding to a proposal $p_j$ from $P_V$ and maps it to a sequence of internal representations $ \mathbf {z}^j = (z_1, z_2, \dots , z_{T_j}) $. The decoder is conditioned on the output of the encoder $\mathbf {z}^j$ and the embedding $ \mathbf {e}^j_{\leqslant t} = (e_1, e_2, \dots , e_t)$ of the words in a caption $ \mathbf {w}^j_{\leqslant t} = (w_1, w_2, \dots , w_t) $. It produces the representation $ \mathbf {g}^j_{\leqslant t} = (g_1, g_2, \dots , g_t) $ which, in turn, is used by the generator to model a distribution over the vocabulary for the next word $ p(w_{t+1}|\mathbf {g}^j_{\leqslant t}) $. The next word is selected greedily by taking the word with the highest probability until a special ending token is sampled. The captioning is initialized with a starting token. Both are added to the vocabulary. Before providing an overview of the encoder, decoder, and generator, we present the notion of multi-headed attention, which acts as an essential part of the decoder and encoder blocks. The concept of multi-head attention, in turn, heavily relies on dot-product attention, which we describe next. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Dot-product Attention The idea of the multi-headed attention rests on the scaled dot-product attention, which calculates a weighted sum of values. The weights are obtained by applying the softmax function to the dot-product of each pair of rows of the queries and keys, scaled by $\frac{1}{\sqrt{D_k}}$.
The scaling is done to prevent the softmax function from entering its small-gradient regions BIBREF3. Formally, the scaled dot-product attention can be represented as $\text{Attention}(Q, K, V) = \text{softmax}\big (\frac{Q K^\top }{\sqrt{D_k}}\big ) V$, where $Q, K, V $ are the queries, keys, and values, respectively. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Multi-headed Attention The multi-headed attention block is used once in each encoder layer and twice in each decoder layer. The block consists of $H$ heads that allow the model to jointly account for information from several representation sub-spaces at every position while preserving the same computational complexity BIBREF3. In a transformer with dimension $D_T$, each head is defined as $\text{head}_h = \text{Attention}(q W^{q}_h, k W^{k}_h, v W^{v}_h)$, where $q, k, v$ are matrices which have $D_T$ columns and a number of rows depending on the position of the multi-headed block, yet with the same number of rows for $k$ and $v$ to make the calculation in (DISPLAY_FORM11) feasible. The $W^{q}_h, W^{k}_h, W^{v}_h \in \mathbb {R}^{D_T \times D_k}$ are trainable projection matrices that map $q, k , v$ from $D_T$ into $D_k= \frac{D_T}{H}$, asserting that $D_T$ is a multiple of $H$. The multi-head attention, in turn, is the concatenation of all attention heads mapped back into $D_T$ by the trainable parameter matrix $W^o \in \mathbb {R}^{D_k \cdot H \times D_T}$: $\text{MultiHead}(q, k, v) = [\text{head}_1; \dots ; \text{head}_H] \, W^o$. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Encoder The encoder consists of $ L $ layers. The first layer inputs a set of features $ \mathbf {v}^j $ and outputs an internal representation $ \mathbf {z}_1^j \in \mathbb {R}^{T_j \times D_T} $, while each of the next layers treats the output of the previous layer as its input. Each encoder layer $l$ consists of two sub-layers: multi-headed attention and a position-wise fully connected network, which are explained later in this section. The inputs to both sub-layers are normalized using layer normalization BIBREF48, and each sub-layer is surrounded by a residual connection BIBREF49 (see Fig. FIGREF6). Formally, the $l$-th encoder layer has the following definition where $\text{FCN}$ is the position-wise fully connected network. Note that the multi-headed attention has identical queries, keys, and values ($ \overline{\mathbf {z}}_l^j $). Such a multi-headed attention block is also referred to as self-multi-headed attention. It enables an encoder layer $l$ to account for information from all states of the previous layer $ \mathbf {z}_{l-1}^j$. This property contrasts with the idea of an RNN, which accumulates only the information from past positions. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Decoder Similarly to the encoder, the decoder has $ L $ layers. At a position $t$, the decoder inputs a set of embedded words $\mathbf {e}^j_{\leqslant t}$ together with the output of the encoder $\mathbf {z}^j$ and sends its output to the next layer, which is conditioned on this output and, again, the encoder output $\mathbf {z}^j$. Eventually, the decoder produces its internal representation $\mathbf {g}_{\leqslant t}^j \in \mathbb {R}^{t \times D_T}$. The decoder block is similar to the encoder but has an additional sub-layer that applies multi-headed attention on the encoder output and the output of its previous sub-layer. The decoder employs layer normalization and residual connections at all three sub-layers in the same fashion as the encoder. Specifically, the $l$-th decoder layer has the following form: where $ \mathbf {z}^j $ is the encoder output.
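The scaled dot-product and multi-headed attention blocks described above can be sketched in PyTorch as follows. This is a standard Transformer-style implementation written for illustration rather than the authors' released code; as is common, the per-head projections $W^{q}_h, W^{k}_h, W^{v}_h$ are implemented jointly as single $D_T \times D_T$ linear layers, and the toy example at the bottom uses the visual dimensions reported later in the paper ($D_T = 1024$, $H = 4$).

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq, d_k); mask broadcastable to (batch, heads, seq, seq)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0, "D_T must be a multiple of H"
        self.h, self.d_k = num_heads, d_model // num_heads
        # joint projections for all heads (W^q, W^k, W^v) and the output projection W^o
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)

    def forward(self, q, k, v, mask=None):
        b = q.size(0)
        def split(x, w):  # (b, seq, d_model) -> (b, h, seq, d_k)
            return w(x).view(b, -1, self.h, self.d_k).transpose(1, 2)
        out = scaled_dot_product_attention(split(q, self.w_q),
                                           split(k, self.w_k),
                                           split(v, self.w_v), mask)
        out = out.transpose(1, 2).contiguous().view(b, -1, self.h * self.d_k)
        return self.w_o(out)

# self-attention over a toy visual feature sequence
x = torch.randn(2, 10, 1024)
attn = MultiHeadAttention(d_model=1024, num_heads=4)
print(attn(x, x, x).shape)  # torch.Size([2, 10, 1024])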
Note, similarly to the encoder, (DISPLAY_FORM18) is a self-multi-headed attention function while the second multi-headed attention block attends on both the encoder and decoder and is also referred to as encoder-decoder attention. This block enables each layer of the decoder to attend all state of the encoder's output $ \mathbf {z}^j$. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Position-wise Fully-Connected Network The fully connected network is used in each layer of the encoder and the decoder. It is a simple two-layer neural network that inputs $x$ with the output of the multi-head attention block, and, then, projects each row (or position) of the input $x$ from $D_T$ space onto $D_P$, $(D_P > D_T)$ and back, formally: where $W_1 \in \mathbb {R}^{D_T \times D_P}$, $W_2 \in \mathbb {R}^{D_P \times D_T}$, and biases $b_1, b_2$ are trainable parameters, $\text{ReLU}$ is a rectified linear unit. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Generator At the position $t$, the generator consumes the output of the decoder $\mathbf {g}^j_{\leqslant t}$ and produces a distribution over the vocabulary of words $p(w_{t+1}| \mathbf {g}^j_{\leqslant t})$. To obtain the distribution, the generator applies the softmax function of the output of a fully connected layer with a weight matrix $W_G \in \mathbb {R}^{D_T \times D_V}$ where $D_V$ is a vocabulary size. The word with the highest probability is selected as the next one. Proposed Framework ::: Captioning Module ::: Feature Transformer ::: Input Embedding and Positional Encoding Since the representation of textual data is usually sparse due to a large vocabulary, the dimension of the input of a neural language model is reduced with an embedding into a dimension of a different size, namely $D_T$. Also, following BIBREF3, we multiply the embedding weights by $\sqrt{D_T}$. The position encoding is required to allow the transformer to have a sense of the order in an input sequence. We adopt the approach proposed for a transformer architecture, i. e. we add the output of the combination of sine and cosine functions to the embedded input sequence BIBREF3. Proposed Framework ::: Captioning Module ::: Multi-modal Dense Video Captioning In this section, we present the multi-modal dense video captioning module which, utilises visual, audio, and speech modalities. See Fig. FIGREF6 for a schematic representation of the module. For the sake of speech representation $\mathbf {s}^j = (s_1, s_2, \dots , s_{T_j^s})$, we use the text embedding of size 512-d that is similar to the one which is employed in the embedding of a caption $\mathbf {w}^j_{\leqslant t}$. To account for the audio information, given a proposal $p_j$ we extract a set of features $\mathbf {a}_j = (a_1, a_2, \dots , a_{T_j^a})$ applying the 128-d embedding layer of the pre-trained VGGish network BIBREF47 on an audio track. While the visual features $\mathbf {v}^j = (v_1, v_2, \dots v_{T_j^v}) $ are encoded with 1024-d vectors by Inflated 3D (I3D) convolutional network BIBREF46. To fuse the features, we create an encoder and a decoder for each modality with dimensions corresponding to the size of the extracted features. The outputs from all decoders are fused inside of the generator, and the distribution of a next word $w_{t+1}$ is formed. 
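The input embedding scaled by $\sqrt{D_T}$ and the sine/cosine positional encoding described earlier in this section can be sketched as follows; again, this is an illustrative PyTorch reconstruction rather than the released implementation, and the vocabulary size in the usage line simply follows the value given in the implementation details.

import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    # Adds the fixed sine/cosine position signal of BIBREF3 to a
    # (batch, seq_len, d_model) tensor of embedded inputs.
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float)
                        * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe.unsqueeze(0))

    def forward(self, x):
        return x + self.pe[:, : x.size(1)]

class CaptionEmbedding(nn.Module):
    # Token embedding multiplied by sqrt(D_T), followed by positional encoding.
    def __init__(self, vocab_size, d_model):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        self.scale = math.sqrt(d_model)
        self.pos = PositionalEncoding(d_model)

    def forward(self, tokens):  # (batch, seq_len) of word indices
        return self.pos(self.emb(tokens) * self.scale)

emb = CaptionEmbedding(vocab_size=10172, d_model=512)
print(emb(torch.randint(0, 10172, (2, 7))).shape)  # torch.Size([2, 7, 512])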
In our experimentation, we found that a simple two-layer fully-connected network applied to the matrix of concatenated features performs best, with a ReLU activation after the first layer and a softmax after the second one. Each layer of the network has a matrix of trainable weights: $W_{F_1} \in \mathbb {R}^{D_F \times D_V}$ and $W_{F_2} \in \mathbb {R}^{D_V \times D_V}$, where $D_F = 512 + 128 + 1024 $ and $D_V$ is the vocabulary size. Proposed Framework ::: Model Training As the training is conducted using mini-batches of size 28, the features in one modality must be of the same length so that they can be stacked into a tensor. In this regard, we pad the features and the embedded captions to match the size of the longest sample. The model is trained by optimizing the Kullback–Leibler divergence loss, which measures the “distance” between the ground truth and predicted distributions, averaging the values for all words in a batch while ignoring the masked tokens. Since many words in the English language may have several synonyms and human annotation may contain mistakes, we encourage the model to be less certain about its predictions and apply Label Smoothing BIBREF50 with smoothing parameter $\gamma $ to the ground truth labels to mitigate this. In particular, in the ground truth distribution over the vocabulary of size $D_V$, which is usually represented as a one-hot encoded vector, the unit value is replaced with probability $1-\gamma $ while the rest of the entries are filled with $\frac{\gamma }{D_V-1}$. During training, we exploit the teacher forcing technique, which uses the ground truth sequence up to position $t$ as the input to predict the next word instead of using the sequence of predictions. As we input the whole ground truth sequence at once and predict the next word at each position, we need to prevent the transformer from peeking at information from subsequent positions, since it attends to all positions of the input. To mitigate this, we apply masking inside the self-multi-headed attention block in the decoder for each position higher than $t-1$, following BIBREF3. The details on feature extraction and other implementation details are available in the supplementary materials. Experiments ::: Dataset We perform our experiments using the ActivityNet Captions dataset BIBREF2, which is considered the standard benchmark for the dense video captioning task. The dataset contains approximately 20k videos from YouTube and is split into 50/25/25 % parts for training, validation, and testing, respectively. Each video, on average, contains 3.65 temporally localized captions of around 13.65 words each, and is about two minutes long. In addition, each video in the validation set is annotated twice by different annotators. We report all results using the validation set (no ground truth is provided for the test set). The dataset itself is distributed as a collection of links to YouTube videos, some of which are no longer available. The authors provide pre-computed C3D features and frames at 5fps, but these are not suitable for our experiments. At the time of writing, we found 9,167 (out of 10,009) training and 4,483 (out of 4,917) validation videos, which is roughly 91 % of the dataset. Out of these, 2,798 training and 1,374 validation videos (approx. 28 %) contain at least one speech segment. The speech content was obtained from the closed captions (CC) provided by the YouTube ASR system, which can be thought of as subtitles.
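Returning to the training objective described above, here is a small sketch of the label-smoothed target distribution, a KL-divergence loss that ignores padded positions, and the mask that keeps the decoder from attending to future positions. It is an illustrative reconstruction from the text, not the released training code; the padding index and tensor shapes are assumptions, and $\gamma = 0.7$ simply follows the value reported in the implementation details.

import torch
import torch.nn.functional as F

def smoothed_targets(gt_indices, vocab_size, gamma=0.7):
    # One-hot targets with the unit value replaced by 1 - gamma and the
    # remaining mass gamma spread uniformly over the other vocabulary entries.
    target = torch.full((gt_indices.numel(), vocab_size), gamma / (vocab_size - 1))
    target.scatter_(1, gt_indices.view(-1, 1), 1.0 - gamma)
    return target

def caption_loss(log_probs, gt_indices, gamma=0.7, pad_idx=0):
    # log_probs: (num_tokens, vocab) log-probabilities from the generator.
    vocab = log_probs.size(-1)
    target = smoothed_targets(gt_indices.view(-1), vocab, gamma)
    per_token = F.kl_div(log_probs.view(-1, vocab), target, reduction="none").sum(-1)
    mask = (gt_indices.view(-1) != pad_idx).float()  # pad_idx is an assumption
    return (per_token * mask).sum() / mask.sum().clamp(min=1.0)

def subsequent_mask(size):
    # Lower-triangular mask so position t only attends to positions <= t.
    return torch.tril(torch.ones(size, size)).bool().unsqueeze(0)

log_probs = F.log_softmax(torch.randn(6, 10172), dim=-1)
gt = torch.randint(1, 10172, (6,))
print(caption_loss(log_probs, gt))
print(subsequent_mask(4))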
Experiments ::: Metrics We are evaluating the performance of our model using BLEU@N BIBREF51 and METEOR BIBREF52. We regard the METEOR as our primary metric as it has been shown to be highly correlated with human judgement in a situation with a limited number of references (only one, in our case). We employ the official evaluation script provided in BIBREF53. Thus, the metrics are calculated if a proposed event and a ground truth location of a caption overlaps more than a specified temporal Intersection over Union (tIoU) and zero otherwise. All metric values are averaged for every video, and, then, for every threshold tIoU in $[0.3, 0.5, 0.7, 0.9]$. On the validation, we average the resulting scores for both validation sets. For the learned proposal setting, we report our results on at most 100 proposals per video. Notably, up to early 2017, the evaluation code had an issue which previously overestimated the performance of the algorithms in the learned proposal setting BIBREF9. Therefore, we report the results using the new evaluation code. Experiments ::: Comparison with Baseline Methods We compare our method with five related approaches, namely Krishna BIBREF2, Wang BIBREF5, Zhou BIBREF4, Li BIBREF6, and Rahman BIBREF38. We take the performance values from the original papers, except for BIBREF6, and BIBREF4, which are taken from BIBREF9 due to the evaluation issue (see Sec. SECREF27). The lack of access to the full ActivityNet Captions dataset makes strictly fair comparison difficult as we have less training and validation videos. Nevertheless, we present our results in two set-ups: 1) full validation set with random input features for missing entries, and 2) videos with all three modalities present (video, audio, and speech). The first one is chosen to indicate the lower bound of our performance with the full dataset. Whereas, the second one (referred to as “no missings”) concentrates on the multi-modal setup, which is the main contribution of our work. The obtained results are presented in Tab. TABREF25. Our method (MDVC) achieves comparable or better performance, even though we have access to smaller training set and 9 % of the validation videos are missing (replaced with random input features). Furthermore, if all three modalities are present, our method outperforms all baseline approaches in the case of both GT and learned proposals. Notably, we outperform BIBREF4 which is also based on the transformer architecture and account for the optical flow. This shows the superior performance of our captioning module which, yet, trained on the smaller amount of data. Experiments ::: Ablation Studies In this section, we perform an ablation analysis highlighting the effect of different design choices of our method. For all experiments, we use the full unfiltered ActivityNet Captions validation set with ground truth event proposals. Firstly, we assess the selection of the model architecture. To this end, we implemented a version of our method where the transformer was replaced by Bidirectional Recurrent Neural Network with Gated Recurrent Units with attention (Bi-GRU), proposed in BIBREF54. To distil the effect of the change in architecture, the results are shown for visual-only models. Both Bi-GRU and the transformer input I3D features extracted from 64 RGB and optical flow frames (the final model inputs 24 frames). Finally, we set a lower bound for the feature performance by training a transformer model with random video features. Tab. TABREF32 shows the comparison. 
To conclude, we observe that the feature transformer-based model is not only lighter but also achieves better performance in the dense video captioning task. Moreover, both methods clearly surpass the random baseline. Secondly, we evaluate the contribution of different modalities in our framework. Tab. TABREF33 contains the results for different modality configurations as well as for two feature fusion approaches: averaging of the output probabilities, and concatenating the outputs of all modalities and applying two fully connected (FC) layers on top. We observe that the audio-only model has the worst performance, followed by the visual-only model, and then the combination of these two. Moreover, the concatenation and FC layers result in better performance than averaging. To further assess whether the performance gain is due to the additional modalities or to the extra capacity in the FC layers, we trained a visual-only model with two additional FC layers. The results indicate that such a configuration performs worse than any bi-modal setup. Overall, we conclude that the final model with all three modalities performs best among all tested set-ups, which highlights the importance of the multi-modal setting in the dense video captioning task. Fig. FIGREF29 shows a qualitative comparison between different models in our ablation study. Moreover, we provide the corresponding captions from the best performing baseline method (Zhou BIBREF4). We noticed the following pattern: the audio-only model produces coherent sentences and captures the concept of speaking in the video. However, there are clear mistakes in the caption content. In contrast, the model with all three modalities manages to capture the man who speaks to the camera, which is also present in the ground truth. Both the visual-only MDVC and Zhou struggle to describe the audio details. Finally, to test whether our model improves the performance in general rather than in a specific video category, we report the comparison of the different versions of MDVC per category. To this end, we retrieve the category labels from the YouTube API BIBREF12 (US region) for every available ActivityNet Captions validation video. These labels are given by the user when uploading the video and roughly represent the video content type. The comparison is shown in Fig. FIGREF31. The results imply a consistent gain in performance within each category, except for the categories “Film & Animation” and “Travel & Events”, which might be explained by the lack of correspondence between the visual and audio tracks. Specifically, the video might be accompanied by music, e.g. the promotion of a resort. Also, “Film & Animation” contains cartoon-like movies which might have a realistic soundtrack while the visual track is goofy. Conclusion The use of different modalities in computer vision is still an underrepresented topic and, we believe, deserves more attention. In this work, we introduced a multi-modal dense video captioning module (MDVC) and showed the importance of the audio and speech modalities for the dense video captioning task. Specifically, MDVC is based on the transformer architecture, which encodes the feature representation of each modality for a specific event proposal and produces a caption using the information from these modalities. The experimentation, conducted on the ActivityNet Captions dataset, shows the superior performance of our captioning module compared to the visual-only models in the existing literature. An extensive ablation study verifies this conclusion.
We believe that our results firmly indicate that future works in video captioning should utilize a multi-modal input. Supplementary Material The supplementary material consists of four sections. In Section SECREF35, we provide qualitative results of the MDVC on another example video. The details on features extraction and implementation are described in Section SECREF36 and SECREF38. Finally, the comparison with other methods is shown in Section SECREF39. Supplementary Material ::: Qualitative Results (Another Example) In Figure FIGREF34, we provide qualitative analysis of captioning on another video from ActivityNet Captions validation set to emphasize the importance of additional modalities for dense video captioning, namely, speech and audio. We compare the captioning proposed by MDVC (our model) conditioned on different sets of modalities: audio-only (A-only), visual-only (V-only), and including all modalities (S + A + V). Additionally, we provide the results of a captioning model proposed in Zhou BIBREF4 (visual only) which showed the most promising results according to METEOR. More precisely, the video (YouTube video id: EGrXaq213Oc) lasts two minutes and contains 12 human annotations. The video is an advertisement for snowboarding lessons for children. It shows examples of children successfully riding a snowboard on a hill and supportive adults that help them to learn. A lady narrates the video and appears in the shot a couple of times. Generally, we may observe that MDVC with the audio modality alone (A-only) mostly describes that a woman is speaking which is correct according to the audio content yet the details about snowboarding and children are missing. This is expectedly challenging for the network as no related sound effects to snowboarding are present. In the meantime, the visual-only MDVC grasps the content well, however, misses important details like the gender of the speaker. While the multi-modal model MDVC borrows the advantages of both which results in more accurate captions. The benefits of several modalities stand out in captions for $p_2$ and $p_{10}$ segments. Note that despite the appearance of the lady in the shot during $p_{10}$, the ground truth caption misses it yet our model manages to grasp it. Yet, some limitations of the final model could be noticed as well. In particular, the content of some proposals is dissimilar to the generated captions, e. g. the color of the jacket ($p_4$, $p_5$), or when a lady is holding a snowboard with a child on it while the model predicts that she is holding a ski ($p_7$). Also, the impressive tricks on a snowboard were guessed simply as “ridding down a hill” which is not completely erroneous but still inaccurate ($p_8$). Overall, the model makes reasonable mistakes except for proposals $p_3$ and $p_4$. Finally, the generated captions provide more general description of a scene compared to the ground truth that is detailed and specific which could be a subject for future investigation. Supplementary Material ::: Details on Feature Extraction Before training, we pre-calculate the features for both audio and visual modalities. In particular, the audio features were extracted using VGGish BIBREF47 which was trained on AudioSet BIBREF55. The input to the VGGish model is a $96\times 64$ log mel-scaled spectrogram extracted for non-overlapping $0.96$ seconds segments. 
The log mel-scaled spectrogram is obtained by applying Short-Time Fourier Transform on a 16 kHz mono audio track using a periodic Hann window with 25 ms length with 10 ms overlap. The output is a 128-d feature vector after an activation function and extracted before a classification layer. Therefore, the input to MDVC is a matrix with dimension $T_j^a \times 128$ where $T_j^a$ is the number of features proposal $p_j$ consists of. The visual features were extracted using I3D BIBREF46 network which inputs a set of 24 RGB and optical flow frames extracted at 25 fps. The optical flow is extracted with PWC-Net BIBREF58. First, each frame is resized such that the shortest side is 256 pixels. Then, the center region is cropped to obtain $224\times 224$ frames. Both RGB and flow stacks are passed through the corresponding branch of I3D. The output of each branch are summed together producing 1024-d features for each stack of 24 frames. Hence, the resulting matrix has the shape: $T_j^v\times 1024$, where $T_j^v$ is the number of features required for a proposal $p_j$. We use 24 frames for I3D input to temporally match with the input of the audio modality as $\frac{24}{25} = 0.96$. Also note that I3D was pre-trained on the Kinetics dataset with inputs of 64 frames, while we use 24 frames. This is a valid approach since we employ the output of the second to the last layer after activation and average it on the temporal axis. The input for speech modality is represented by temporally allocated text segments in the English language (one could think of them as subtitles). For a proposal $ p_j $, we pick all segments that both: a) end after the proposal starting point, and b) start before the proposal ending point. This provides us with sufficient coverage of what has been said during the proposal segment. Similarly to captions, each word in a speech segment is represented as a number which corresponds to the word's order number in the vocabulary and then passed through the text embedding of size 512. We omit the subtitles that describe the sound like “[Applause]” and “[Music]” as we are only interested in the effect of the speech. Therefore, the speech transformer encoder inputs matrices of shape: $T^s_j\times 512$ where $T^s_j$ is the number of words in corresponding speech for proposal $p_j$. Supplementary Material ::: Implementation Details Since no intermediate layers connecting the features and transformers are used, the dimension of the features transformers $D_T$ corresponds to the size of the extracted features: 512, 128, and 1024 for speech, audio, and visual modalities, respectively. Each feature transformer has one layer ($L$), while the internal layer in the position-wise fully-connected network has $D_P=2048$ units for all modality transformers which was found to perform optimally. We use $H=4$ heads in all multi-headed attention blocks. The captions and speech vocabulary sizes are 10,172 and 23,043, respectively. In all experiments, except for the audio-only model, we use Adam optimizer BIBREF56, a batch containing features for 28 proposals, learning rate $10^{-5}$, $\beta = (0.9, 0.99)$, smoothing parameter $\gamma = 0.7$. In the audio-only model, we apply two-layered transformer architecture with learning rate $10^{-4}$ and $\gamma = 0.2$. To regularize the weights of the model, in every experiment, Dropout BIBREF57 with $p = 0.1$ is applied to the outputs of positional encoding, in every sub-layer before adding a residual, and after the first internal layer of the multi-modal generator. 
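The rule quoted above for assigning ASR segments to a proposal (keep every segment that ends after the proposal start and starts before the proposal end, skipping sound-only captions such as “[Music]”) is easy to express in code; the segment dictionary format below is an assumption made for illustration, not the released data format.

from typing import Dict, List

def speech_for_proposal(segments: List[Dict], start: float, end: float) -> str:
    # Collect the ASR text overlapping a proposal [start, end].
    # Each segment is assumed to look like
    # {"start": 12.4, "end": 15.1, "text": "hello and welcome"};
    # bracketed sound tags such as "[Music]" are skipped, as in the paper.
    picked = [s["text"] for s in segments
              if s["end"] > start and s["start"] < end
              and not s["text"].startswith("[")]
    return " ".join(picked)

subtitles = [
    {"start": 0.0, "end": 2.0, "text": "[Music]"},
    {"start": 2.0, "end": 6.5, "text": "today we learn to snowboard"},
    {"start": 7.0, "end": 9.0, "text": "keep your knees bent"},
]
print(speech_for_proposal(subtitles, start=5.0, end=8.0))
# -> "today we learn to snowboard keep your knees bent"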
During the experimentation, models were trained for at most 200 epochs, and training was stopped early if the average METEOR score calculated on the ground truth event proposals of both validation sets had not improved for 50 consecutive epochs. At the end of training, we employ the best model to estimate its performance on the learned temporal proposals. The training of the best models usually finished by the 50th epoch; e.g. the final model (MDVC (S + A + V)) was trained for 30 epochs, which took roughly 15 hours on one consumer-type GPU (Nvidia GeForce RTX 2080 Ti). The training code heavily relies on the PyTorch framework and will be released upon publication. Supplementary Material ::: Comparison with Other Methods In Table TABREF37, we present a comparison with another body of methods BIBREF8, BIBREF9 which were not included in the main comparison as they use a Reinforcement Learning (RL) approach to directly optimize the non-differentiable metric (METEOR). We believe that our method could also benefit from this, as the ablation studies in BIBREF8, BIBREF9 show significant gains obtained by applying it. As anticipated, methods which employ reinforcement learning generally perform better in terms of METEOR. Interestingly, our model still outperforms BIBREF8, which uses RL in the captioning module.
Unanswerable
5cd5864077e4074bed01e3a611b747a2180088a0
5cd5864077e4074bed01e3a611b747a2180088a0_0
Q: How big are datasets used in experiments? Text: Introduction Ultrasound technology is a widespread technology in speech research for studying tongue movement and speech articulation BIBREF0 due to its attractive characteristics, such as imaging at a reasonably rapid frame rate, which empowers researchers to envision subtle and swift gestures of the tongue in real-time. Besides, ultrasound technology is portable, relatively affordable, and clinically safe and non-invasive BIBREF1. The mid-sagittal view is regularly adapted in ultrasound data as it displays relative backness, height, and the slope of various areas of the tongue. Quantitative analysis of tongue motion needs the tongue contour to be extracted, tracked, and visualized. Manual frame-by-frame tongue contour extraction is a cumbersome, subjective, and error-prone task. Moreover, it is not a feasible solution for real-time applications. In conventional techniques, for extracting ultrasound tongue contours, a discrete set of vertices are first annotated near the lower part of the tongue dorsum defining initial deformable tongue contour BIBREF2. Then, through an iterative minimization process using features of the image, the annotated contour is regulated toward the tongue dorsum region. For instance, in active contour models technique (e.g., EdgeTrak software) BIBREF3, BIBREF4, two internal and external energy functions are minimized over the image gradient. The requirement of feature extraction for each image and accurate initialization are two main drawbacks for those classical techniques. Another alternative scenario is to use semi-supervised machine learning models for automatic segmentation of tongue contour regions. Then, tongue contours are extracted automatically using post-processing stages. Semi-supervised machine learning-based methods BIBREF5 are first utilized for ultrasound tongue contour segmentation in an study by BIBREF6, BIBREF7 while deep learning models emerge in this field through studies by BIBREF0, BIBREF8. They fine-tuned one pre-trained decoder part of a Deep Belief Network (DBN) model to infer tongue contours from new instances. End-to-end fashion supervised deep learning techniques, outperformed previous techniques in recent years. For example, U-net BIBREF9 has been used for automatic ultrasound tongue extraction BIBREF10, BIBREF11. After successful results of deep learning methods, the focus of advanced techniques for tongue contour extraction is more on generalization and real-time performance BIBREF12, BIBREF13, BIBREF14. Although deep learning methods have been utilized successfully in many studies, manual annotation of ultrasound tongue databases is still cumbersome, and the performance of supervised methods mostly depends on the accuracy of the annotated database as well as the number of available samples. Available databases in this field are annotated by linguistics experts for many years employing landmark points on the tongue contours. In this work, we proposed a new direction for the problem of ultrasound tongue contour extraction using a deep learning technique where instead of tracking the tongue surface, landmarks on the tongue are tracked. In this way, researchers can use previously available linguistics ultrasound tongue databases. Moreover, the whole process of tongue contour extraction is performed in one step, where it increases the performance speed without comprising accuracy or generalization ability of the previous techniques. 
Methodology Similar to facial landmark detection methods BIBREF15, we consider the problem of tongue contour extraction as a simple landmark detection and tracking task. For this reason, we first developed customized annotation software that can extract equally separated and randomized markers from segmented tongue contours in different databases. Meanwhile, the same software can fit B-spline curves to the extracted markers to revert the process for evaluation purposes. To track landmarks on the tongue surface, we designed a lightweight deep convolutional neural network named TongueNet. Figure FIGREF1 shows the TongueNet architecture. In each layer, convolutional operations are followed by ReLU activations as the non-linearity as well as by batch normalization layers to improve regularization, convergence, and accuracy. For the sake of better generalization ability of the model, the fully connected layers in the last two layers are equipped with dropout layers of $50\%$. To find the optimum number of required points in the output layer, we varied the number of points ($\#$ in Figure FIGREF1) from 5 to 100 (see Figure FIGREF2 for samples of this experiment with 5, 10, 15, 20, 25, and 30 points as the output). Experimental Results and Discussion There is usually a trade-off between the number of training samples and the number of trainable parameters in a deep network model BIBREF16. In general, the more data we have, the better the results generated by supervised deep learning methods. Data augmentation helps to increase the amount of training data, but a bigger dataset needs a better and most likely bigger network architecture in terms of generalization. Otherwise, the model might overfit or underfit the training data. Using our annotation software, we automatically extracted landmarks from 2000 images of the UOttawa database BIBREF14, which had been annotated for image segmentation tasks. The database was randomly divided into three sets: 90$\%$ training, 5$\%$ validation, and 5$\%$ testing datasets. For testing, we also applied TongueNet to the UBC database BIBREF14 without any training to see the generalization ability of the model. During the training process of TongueNet, we employed online data augmentation, including rotation (-25 to 25 degrees), translation (-30 to 30 points in two directions), scaling (from 0.5 to 2 times), horizontal flipping, and combinations of these transformations, and the annotation point locations were transformed correspondingly. From our extensive random search hyper-parameter tuning, the learning rate, the number of iterations, the mini-batch size, and the number of epochs were determined as 0.0005, 1000, 30, and 10, respectively. We deployed our experiments using Keras with TensorFlow as the backend on a Windows PC with a Core i7 at 4.2 GHz, one NVIDIA 1080 GPU unit, and 32 GB of RAM. Adam optimization with a fixed momentum value of 0.9 was utilized for training. We trained and tested TongueNet with different numbers of points as the output size. We first evaluated the scenario of equally spaced landmarks on the tongue surface. In this method, we captured all the points at equal distances with respect to their neighbors. Our results, from selecting five points up to the number of pixels along the horizontal axis of the image (the image width), revealed that equally spaced point selection does not produce significant results. Figure FIGREF3 shows randomly selected frames from real-time tracking of TongueNet using ultrasound tongue landmarks.
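To make the architecture description above concrete, here is a minimal Keras sketch of a TongueNet-like landmark regressor: convolution + ReLU + batch-normalization blocks, two fully connected layers with 50% dropout, and a coordinate output. The number of blocks, filter counts, input resolution, coordinate parameterization, and the MSE loss are assumptions for illustration only, since the text does not specify them; the optimizer settings follow the reported values.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_tonguenet(input_shape=(128, 128, 1), n_points=10):
    # Lightweight landmark-regression CNN in the spirit of TongueNet.
    inp = layers.Input(shape=input_shape)
    x = inp
    for filters in (16, 32, 64, 128):          # depth and filters are assumptions
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.ReLU()(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)                 # 50% dropout as stated in the text
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(2 * n_points)(x)        # (x, y) per landmark, an assumption
    return models.Model(inp, out)

model = build_tonguenet()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005, beta_1=0.9),
              loss="mse")                      # regression loss is an assumption
model.summary()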
As can be seen from the figure, automatic selection of equally spaced points as annotation, which is used in many previous studies (see BIBREF17 for an example), cannot provide accurate results. In a separate study, we extracted annotation points from the same database using a randomly spaced selection method on the tongue contours. We added the restrictions that points should be at a minimum distance from each other, and we omitted outliers from the database. We found that the optimum number of points was ten for our experiments. Figure FIGREF4 shows some randomly selected results from training TongueNet on the randomly selected point database. From the figure, the better accuracy of TongueNet can be seen qualitatively. Note that we did not apply any image enhancement or cropping to the databases. To show the performance of TongueNet quantitatively, we first fitted B-spline curves to the TongueNet predictions using the OpenCV library. Then, we compared the value of the mean sum of distance (MSD) BIBREF8 for the TongueNet, sUNET BIBREF11, UNET BIBREF18, BowNet BIBREF13, and IrisNet BIBREF14 deep learning models. From Table TABREF6, it can be seen that TongueNet reaches results similar to the state-of-the-art deep learning models in this field. Note that there are some approximation errors from the curve-fitting procedure of TongueNet and from the skeletonization process used to extract tongue contours from the segmentation results of the other deep learning models. We likewise tested TongueNet on a new database from the University of British Columbia (the same database used in BIBREF14) to evaluate the generalization ability of the landmark tracking technique. Although TongueNet had not been trained on that database, it could predict favorable instances for those novel video frames with different data distributions, which shows the capability of TongueNet for managing over-fitting. From Table TABREF6, although the difference in MSD values between models is not significant, IrisNet achieves a better MSD value. However, in terms of speed, TongueNet outperforms the other deep learning models, noting that their post-processing time was not included in the calculation of frame rates. Conclusion and Discussion In this paper, we presented TongueNet, a simple deep learning architecture with a novel training approach for the problem of tongue contour extraction and tracking. Unlike similar studies, we track several points on the tongue surface instead of the whole tongue contour region. In recent tongue contour tracking, for quantitative studies, a two-phase procedure is performed, including image segmentation and post-processing for extraction of the tongue contour, which increases computational costs. Furthermore, previously annotated tongue databases available in articulation studies could not be utilized for deep learning techniques until now.
Using TongueNet, we provide a new tool for this literature, and old databases can now be used for developing better tongue contour tracking techniques. All landmark locations in this study were annotated automatically with two different approaches, and we found that data augmentation plays a significant role in the accuracy of TongueNet. From our experimental results, we anticipate that if an extensive manually annotated database, possibly a combination of several databases, is employed for training a deep learning model such as TongueNet, the accuracy of the model would be boosted considerably. The materials of this study will help researchers in different fields, such as linguistics, to study tongue gestures in real time more easily, more accessibly, and with higher accuracy than previous methods. The current early-stage TongueNet technique needs to be developed, trained, and extended into a fast, accurate, real-time, automatic method applicable to available ultrasound tongue databases.
2000 images
d664054c8d1f8e84169d4ab790f2754274353685
d664054c8d1f8e84169d4ab790f2754274353685_0
Q: What previously annotated databases are available? Text: Introduction Ultrasound technology is a widespread technology in speech research for studying tongue movement and speech articulation BIBREF0 due to its attractive characteristics, such as imaging at a reasonably rapid frame rate, which empowers researchers to envision subtle and swift gestures of the tongue in real-time. Besides, ultrasound technology is portable, relatively affordable, and clinically safe and non-invasive BIBREF1. The mid-sagittal view is regularly adapted in ultrasound data as it displays relative backness, height, and the slope of various areas of the tongue. Quantitative analysis of tongue motion needs the tongue contour to be extracted, tracked, and visualized. Manual frame-by-frame tongue contour extraction is a cumbersome, subjective, and error-prone task. Moreover, it is not a feasible solution for real-time applications. In conventional techniques, for extracting ultrasound tongue contours, a discrete set of vertices are first annotated near the lower part of the tongue dorsum defining initial deformable tongue contour BIBREF2. Then, through an iterative minimization process using features of the image, the annotated contour is regulated toward the tongue dorsum region. For instance, in active contour models technique (e.g., EdgeTrak software) BIBREF3, BIBREF4, two internal and external energy functions are minimized over the image gradient. The requirement of feature extraction for each image and accurate initialization are two main drawbacks for those classical techniques. Another alternative scenario is to use semi-supervised machine learning models for automatic segmentation of tongue contour regions. Then, tongue contours are extracted automatically using post-processing stages. Semi-supervised machine learning-based methods BIBREF5 are first utilized for ultrasound tongue contour segmentation in an study by BIBREF6, BIBREF7 while deep learning models emerge in this field through studies by BIBREF0, BIBREF8. They fine-tuned one pre-trained decoder part of a Deep Belief Network (DBN) model to infer tongue contours from new instances. End-to-end fashion supervised deep learning techniques, outperformed previous techniques in recent years. For example, U-net BIBREF9 has been used for automatic ultrasound tongue extraction BIBREF10, BIBREF11. After successful results of deep learning methods, the focus of advanced techniques for tongue contour extraction is more on generalization and real-time performance BIBREF12, BIBREF13, BIBREF14. Although deep learning methods have been utilized successfully in many studies, manual annotation of ultrasound tongue databases is still cumbersome, and the performance of supervised methods mostly depends on the accuracy of the annotated database as well as the number of available samples. Available databases in this field are annotated by linguistics experts for many years employing landmark points on the tongue contours. In this work, we proposed a new direction for the problem of ultrasound tongue contour extraction using a deep learning technique where instead of tracking the tongue surface, landmarks on the tongue are tracked. In this way, researchers can use previously available linguistics ultrasound tongue databases. Moreover, the whole process of tongue contour extraction is performed in one step, where it increases the performance speed without comprising accuracy or generalization ability of the previous techniques. 
Methodology Similar to facial landmark detection methods BIBREF15, we consider the problem of tongue contour extraction as a simple landmark detection and tracking task. For this reason, we first developed customized annotation software that can extract equally separated and randomized markers from segmented tongue contours in different databases. Meanwhile, the same software can fit B-spline curves to the extracted markers to revert the process for evaluation purposes. To track landmarks on the tongue surface, we designed a lightweight deep convolutional neural network named TongueNet. Figure FIGREF1 shows the TongueNet architecture. In each layer, convolutional operations are followed by ReLU activations as the non-linearity as well as by batch normalization layers to improve regularization, convergence, and accuracy. For the sake of better generalization ability of the model, the fully connected layers in the last two layers are equipped with dropout layers of $50\%$. To find the optimum number of required points in the output layer, we varied the number of points ($\#$ in Figure FIGREF1) from 5 to 100 (see Figure FIGREF2 for samples of this experiment with 5, 10, 15, 20, 25, and 30 points as the output). Experimental Results and Discussion There is usually a trade-off between the number of training samples and the number of trainable parameters in a deep network model BIBREF16. In general, the more data we have, the better the results generated by supervised deep learning methods. Data augmentation helps to increase the amount of training data, but a bigger dataset needs a better and most likely bigger network architecture in terms of generalization. Otherwise, the model might overfit or underfit the training data. Using our annotation software, we automatically extracted landmarks from 2000 images of the UOttawa database BIBREF14, which had been annotated for image segmentation tasks. The database was randomly divided into three sets: 90$\%$ training, 5$\%$ validation, and 5$\%$ testing datasets. For testing, we also applied TongueNet to the UBC database BIBREF14 without any training to see the generalization ability of the model. During the training process of TongueNet, we employed online data augmentation, including rotation (-25 to 25 degrees), translation (-30 to 30 points in two directions), scaling (from 0.5 to 2 times), horizontal flipping, and combinations of these transformations, and the annotation point locations were transformed correspondingly. From our extensive random search hyper-parameter tuning, the learning rate, the number of iterations, the mini-batch size, and the number of epochs were determined as 0.0005, 1000, 30, and 10, respectively. We deployed our experiments using Keras with TensorFlow as the backend on a Windows PC with a Core i7 at 4.2 GHz, one NVIDIA 1080 GPU unit, and 32 GB of RAM. Adam optimization with a fixed momentum value of 0.9 was utilized for training. We trained and tested TongueNet with different numbers of points as the output size. We first evaluated the scenario of equally spaced landmarks on the tongue surface. In this method, we captured all the points at equal distances with respect to their neighbors. Our results, from selecting five points up to the number of pixels along the horizontal axis of the image (the image width), revealed that equally spaced point selection does not produce significant results. Figure FIGREF3 shows randomly selected frames from real-time tracking of TongueNet using ultrasound tongue landmarks.
As can be seen from the figure, automatic selection of equally spaced points as annotation, which is used in many previous studies (see BIBREF17 for an example), cannot provide accurate results. In a separate study, we extracted annotation points from the same database using a randomly spaced selection method on the tongue contours. We added the restrictions that points should be at a minimum distance from each other, and we omitted outliers from the database. We found that the optimum number of points was ten for our experiments. Figure FIGREF4 shows some randomly selected results from training TongueNet on the randomly selected point database. From the figure, the better accuracy of TongueNet can be seen qualitatively. Note that we did not apply any image enhancement or cropping to the databases. To show the performance of TongueNet quantitatively, we first fitted B-spline curves to the TongueNet predictions using the OpenCV library. Then, we compared the value of the mean sum of distance (MSD) BIBREF8 for the TongueNet, sUNET BIBREF11, UNET BIBREF18, BowNet BIBREF13, and IrisNet BIBREF14 deep learning models. From Table TABREF6, it can be seen that TongueNet reaches results similar to the state-of-the-art deep learning models in this field. Note that there are some approximation errors from the curve-fitting procedure of TongueNet and from the skeletonization process used to extract tongue contours from the segmentation results of the other deep learning models. We likewise tested TongueNet on a new database from the University of British Columbia (the same database used in BIBREF14) to evaluate the generalization ability of the landmark tracking technique. Although TongueNet had not been trained on that database, it could predict favorable instances for those novel video frames with different data distributions, which shows the capability of TongueNet for managing over-fitting. From Table TABREF6, although the difference in MSD values between models is not significant, IrisNet achieves a better MSD value. However, in terms of speed, TongueNet outperforms the other deep learning models, noting that their post-processing time was not included in the calculation of frame rates. Conclusion and Discussion In this paper, we presented TongueNet, a simple deep learning architecture with a novel training approach for the problem of tongue contour extraction and tracking. Unlike similar studies, we track several points on the tongue surface instead of the whole tongue contour region. In recent tongue contour tracking, for quantitative studies, a two-phase procedure is performed, including image segmentation and post-processing for extraction of the tongue contour, which increases computational costs. Furthermore, previously annotated tongue databases available in articulation studies could not be utilized for deep learning techniques until now.
Using TongueNet, we provide a new tool for this literature, and older databases can now be used for developing better tongue contour tracking techniques. All landmark locations in this study were annotated automatically with two different approaches, and we found that data augmentation plays a significant role in the accuracy of TongueNet. From our experimental results, we anticipate that if an extensive manually annotated database, possibly a combination of several databases, were employed to train a deep learning model such as TongueNet, the accuracy of the model would be boosted considerably. The materials of this study will help researchers in fields such as linguistics to study tongue gestures in real time more easily, more accessibly, and with higher accuracy than previous methods. The current early-stage TongueNet technique needs to be further developed, trained, and extended into a fast, accurate, real-time, automatic method applicable to available ultrasound tongue databases.
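To make the quantitative evaluation described above concrete, here is a minimal sketch of the comparison step: fitting a smooth curve through predicted landmarks and computing a mean-sum-of-distance style error against a ground-truth contour. The paper fits B-splines with OpenCV; this sketch uses SciPy's spline routines instead, and the symmetric nearest-point distance below is only an approximation of the MSD metric of BIBREF8, so it should be read as an illustration of the procedure rather than the authors' exact evaluation code.

```python
# Hypothetical sketch: fit a B-spline through predicted landmarks and
# compare the resulting contour with a ground-truth contour using a
# symmetric mean nearest-point distance (an approximation of MSD).
# SciPy is used for the spline fit here; the paper uses OpenCV.
import numpy as np
from scipy.interpolate import splprep, splev

def spline_contour(points, n_samples=200, smoothing=0.0):
    """Fit a parametric cubic B-spline through (N, 2) landmark points."""
    tck, _ = splprep([points[:, 0], points[:, 1]], s=smoothing, k=3)
    u = np.linspace(0.0, 1.0, n_samples)
    x, y = splev(u, tck)
    return np.stack([x, y], axis=1)

def mean_nearest_distance(a, b):
    """Mean distance from each point of contour a to its nearest point in b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def msd(pred_points, gt_contour):
    """Symmetric mean distance between the fitted curve and the ground truth."""
    pred_contour = spline_contour(pred_points)
    return 0.5 * (mean_nearest_distance(pred_contour, gt_contour)
                  + mean_nearest_distance(gt_contour, pred_contour))

# Toy example: ten noisy predicted landmarks on an arc vs. a clean arc.
t = np.linspace(0.2, 2.9, 10)
pred = np.stack([60 * np.cos(t) + 64, 40 * np.sin(t) + 10], axis=1)
pred += np.random.normal(scale=1.0, size=pred.shape)
t_gt = np.linspace(0.2, 2.9, 200)
gt = np.stack([60 * np.cos(t_gt) + 64, 40 * np.sin(t_gt) + 10], axis=1)
print("approximate MSD (pixels):", round(msd(pred, gt), 2))
```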
the UBC database BIBREF14
03fb4b31742820df58504575c562bee672e016be
03fb4b31742820df58504575c562bee672e016be_0
Q: Do they address abstract meanings and concepts separately? Text: ...in the beginning was ⊗\otimes No physicists! ...the symbol INLINEFORM0 above does not stand for the operation that turns two Hilbert spaces into the smallest Hilbert space in which the two given ones bilinearly embed. No category-theoreticians! ...neither does it stand for the composition operation that turns any pair of objects (and morphisms) in a monoidal category into another object, and that is subject to a horrendous bunch of conditions that guaranty coherence with the remainder of the structure. Instead, this is what it means: INLINEFORM1 More specifically, it represents the togetherness of foo INLINEFORM0 and foo INLINEFORM1 without giving any specification of who/what foo INLINEFORM2 and foo INLINEFORM3 actually are. Differently put, it's the new stuff that emerges when foo INLINEFORM4 and foo INLINEFORM5 get together. If they don't like each other at all, this may be a fight. If they do like each other a lot, this may be a marriage, and a bit later, babies. Note that togetherness is vital for the emergence to actually take place, given that it is quite hard to either have a fight, a wedding, or a baby, if there is nobody else around. It is of course true that in von Neumann's formalisation of quantum theory the tensor product of Hilbert spaces (also denoted by INLINEFORM0 ) plays this role BIBREF0 , giving rise to the emergent phenomenon of entanglement BIBREF1 , BIBREF2 . And more generally, in category theory one can axiomatise composition of objects (again denoted by INLINEFORM1 ) within a symmetric monoidal category BIBREF3 , giving rise to elements that don't simply arise by pairing, just like in the case of the Hilbert space tensor product. However, in the case of von Neumann's formalisation of quantum theory we are talking about a formalisation which, despite being widely used, its creator von Neumann himself didn't even like BIBREF4 . Moreover, in this formalism INLINEFORM0 only arises as a secondary construct, requiring a detailed description of foo INLINEFORM1 and foo INLINEFORM2 , whose togetherness it describes. What we are after is a `foo-less' conception of INLINEFORM3 . The composition operation INLINEFORM4 in symmetric monoidal categories heads in that direction. However, by making an unnecessary commitment to set-theory, it makes things unnecessarily complicated BIBREF5 . Moreover, while this operation is general enough to accommodate togetherness, it doesn't really tell us anything about it. The title of this section is a metaphor aimed at confronting the complete disregard that the concept of togetherness has suffered in the sciences, and especially, in physics, where all of the effort has been on describing the individual, typically by breaking its description down to that of even smaller individuals. While, without any doubt, this has been a useful endeavour, it unfortunately has evolved in a rigid doctrine, leaving no space for anything else. The most extreme manifestation of this dogma is the use of the term `theory of everything' in particle physics. We will provide an alternative conceptual template for a theory of everything, supported not only by scientific examples, but also by everyday ones. Biology evolved from chopping up individual animals in laboratories, to considering them in the context of other other animals and varying environments. the result is the theory of evolution of species. 
Similarly, our current (still very poor) understanding of the human brain makes it clear that the human brain should not be studied as something in isolation, but as something that fundamentally requires interaction with other brains BIBREF6 . In contemporary audio equipment, music consists of nothing but a strings of zeros and ones. Instead, the entities that truly make up music are pitch, sound, rhythm, chord progression, crescendo, and so on. And in particular, music is not just a bag of these, since their intricate interaction is even more important than these constituents themselves. The same is true for film, where it isn't even that clear what it is made up from, but it does include such things as (easily replaceable) actors, decors, cameras, which all are part of a soup stirred by a director. But again, in contemporary video equipment, it is nothing but a string of zeros and ones. In fact, everything that goes on in pretty much all modern devices is nothing but zeros and ones. While it was Turing's brilliance to realise that this could in fact be done, and provided a foundation for the theory of computability BIBREF7 , this is in fact the only place where the zeros and ones are truly meaningful, in the form of a Turing machine. Elsewhere, it is nothing but a (universal) representation, with no conceptual qualities regarding the subject matter. Formalising togetherness 1: not there yet So, how does one go about formalising the concept of togetherness? While we don't want an explicit description of the foo involved, we do need some kind of means for identifying foo. Therefore, we simple give each foo a name, say INLINEFORM0 . Then, INLINEFORM1 represents the togetherness of INLINEFORM2 and INLINEFORM3 . We also don't want an explicit description of INLINEFORM4 , so how can we say anything about INLINEFORM5 without explicitly describing INLINEFORM6 , INLINEFORM7 and INLINEFORM8 ? Well, rather than describing these systems themselves, we could describe their relationships. For example, in a certain theory togetherness could obey the following equation: INLINEFORM0 That is, togetherness of two copies of something is exactly the same as a single copy, or in simpler terms, one is as good as two. For example, if one is in need of a plumber to fix a pipe, one only needs one. The only thing a second plumber would contribute is a bill for the time he wasted coming to your house. Obviously, this is not the kind of togetherness that we are really interested in, given that this kind adds nothing at all. A tiny bit more interesting is the case that two is as good as three: INLINEFORM0 e.g. when something needs to be carried on a staircase, but there really is only space for two people to be involved. Or, when INLINEFORM0 is female and INLINEFORM1 is male, and the goal is reproduction, we have: INLINEFORM2 (ignoring testosterone induced scuffles and the benefits of natural selection.) We really won't get very far this manner. One way in which things can be improved is by replacing equations by inequalities. For example, while: INLINEFORM0 simply means that one of the two is redundant, instead: INLINEFORM0 can mean that from INLINEFORM0 we can produce INLINEFORM1 , and: INLINEFORM2 can mean that from INLINEFORM0 and INLINEFORM1 together we can produce INLINEFORM2 , and: INLINEFORM3 can mean that in the presence of INLINEFORM0 from INLINEFORM1 we can produce INLINEFORM2 , i.e. that INLINEFORM3 is a catalyst. 
What we have now is a so-called resource theory, that is, a theory which captures how stuff we care about can be interconverted BIBREF8 . Resource theories allow for quantitative analysis, for example, in terms of a conversion rate: INLINEFORM0 So evidently we have some genuine substance now. Formalising togetherness 2: that's better But we can still do a lot better. What a resource theory fails to capture (on purpose in fact) is the actual process that converts one resource into another one. So let's fix that problem, and explicitly account for processes. In terms of togetherness, this means that we bring the fun foo INLINEFORM0 and foo INLINEFORM1 can have together explicitly in the picture. Let: INLINEFORM2 denote some process that transforms INLINEFORM0 into INLINEFORM1 .Then, given two such processes INLINEFORM2 and INLINEFORM3 we can also consider their togetherness: INLINEFORM4 Moreover, some processes can be sequentially chained: INLINEFORM0 We say `some', since INLINEFORM0 has to produce INLINEFORM1 , in order for: INLINEFORM2 to take place. Now, here one may end up in a bit of a mess if one isn't clever. In particular, with a bit of thinking one quickly realises that one wants some equations to be obeyed, for example: (f1f2)f3 = f1(f2f3) h(gf) = (hg)f and a bit more sophisticated, also: (g1g2)(f1f2)=(g1f1)(g2f2) There may even be some more equations that one wants to have, but which ones? This turns out to be a very difficult problem. Too difficult in the light of our limited existence in this world. The origin of this problem is that we treat INLINEFORM0 , and also INLINEFORM1 , as algebraic connectives, and that algebra has its roots in set-theory. The larger-than-life problem can be avoided in a manner that is equally elegant as it is simple. To state that things are together, we just write them down together: INLINEFORM0 There really is no reason to adjoin the symbol INLINEFORM0 between them. Now, this INLINEFORM1 and INLINEFORM2 will play the role of an input or an output of processes transforming them. Therefore, it will be useful to represent them by a wire: INLINEFORM3 Then, a process transforming INLINEFORM0 into INLINEFORM1 can be represented by a box: INLINEFORM2 Togetherness of processes now becomes: INLINEFORM0 and chaining processes becomes: INLINEFORM0 In particular, equations ( SECREF3 ), ( SECREF3 ) and ( SECREF3 ) become: INLINEFORM0 That is, all equations have become tautologies! Anti-cartesian togetherness One important kind of processes are states: INLINEFORM0 These are depicted without any inputs, where `no wire' can be read as `nothing' (or `no-foo'). The opposite notion is that of an effect, that is, a process without an output: INLINEFORM0 borrowing terminology from quantum theory. We can now identify those theories in which togetherness doesn't yield anything new. Life in such a world is pretty lonely... A theory of togetherness is cartesian if each state: INLINEFORM0 decomposes as follows: INLINEFORM0 So cartesianness means that all possible realisations of two foo-s can be achieved by pairing realisations of the individual foo-s involved. In short, a whole can be described in term of its parts, rendering togetherness a void concept. So very lonely and indeed... But, wait a minute. Why is it then the case that so much of traditional mathematics follows this cartesian template, and that even category theory for a long time has followed a strict cartesian stance? Beats me. Seriously...beats me! 
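For readers who want a concrete handle on the cartesian condition just stated (every joint state decomposes into a pair of individual states), it can be probed numerically in the quantum setting: a joint state vector that factorizes as a tensor product has Schmidt rank 1 when reshaped into a matrix, while an entangled state such as the Bell state does not. The short sketch below is an added illustration and is not part of the original essay.

```python
# Illustration: product states factorize (Schmidt rank 1); entangled
# states do not. This is a numerical counterpart of the failure of
# "cartesianness" in quantum theory.
import numpy as np

def schmidt_rank(joint_state, dim_a, dim_b, tol=1e-10):
    """Number of non-negligible singular values of the reshaped state."""
    m = joint_state.reshape(dim_a, dim_b)
    s = np.linalg.svd(m, compute_uv=False)
    return int(np.sum(s > tol))

# A product (cartesian-style) joint state: |psi1> tensor |psi2>.
psi1 = np.array([1.0, 2.0]) / np.sqrt(5.0)
psi2 = np.array([3.0, 1.0]) / np.sqrt(10.0)
product = np.kron(psi1, psi2)

# The Bell state, which cannot be written as such a pair.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)

print("Schmidt rank of product state:", schmidt_rank(product, 2, 2))  # 1
print("Schmidt rank of Bell state:   ", schmidt_rank(bell, 2, 2))     # 2
```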
Anyway, an obvious consequence of this is that for those areas where togetherness is a genuinely non-trivial concept, traditional mathematical structures aren't always that useful. That is maybe why social sciences don't make much use of any kind of modern pure mathematics. And now for something completely different: A theory of togetherness is anti-cartesian if for each INLINEFORM0 there exists INLINEFORM1 , a special state INLINEFORM2 and a special effect INLINEFORM3 : INLINEFORM4 which are such that the following equation holds: yanking The reason for `anti' in the name of this kind of togetherness is the fact that when a theory of togetherness is both cartesian and anti-cartesian, then it is nothing but a theory of absolute death, i.e. it describes a world in which nothing ever happens. Indeed, we have: INLINEFORM0 That is, the identity is a constant process, always outputting the state INLINEFORM0 , independent of what the input is. And if that isn't weird enough, any arbitrary process INLINEFORM1 does the same: INLINEFORM2 Therefore, any anti-cartesian theory of togetherness that involves some aspect of change cannot be cartesian, and hence will have interesting stuff emerging from togetherness. Example 1: quantum theory Anti-cartesian togetherness is a very particular alternative to cartesian togetherness (contra any theory that fails to be cartesian). So one may wonder whether there are any interesting examples. And yes, there are! One example is quantum entanglement in quantum theory. That is in fact where the author's interest in anti-cartesian togetherness started BIBREF15 , BIBREF16 , BIBREF14 . As shown in these papers, equation ( SECREF4 ) pretty much embodies the phenomenon of quantum teleportation BIBREF19 . The full-blown description of quantum teleportation goes as follows BIBREF20 , BIBREF21 , BIBREF13 : telefull It is not important to fully understand the details here. What is important is to note that the bit of this diagram corresponding to equation ( SECREF4 ) is the bold wire which zig-zags through it: telefullbit The thin wires and the boxes labelled INLINEFORM0 are related to the fact that quantum theory is non-deterministic. By conditioning on particular measurement outcomes, teleportation simplifies to BIBREF13 : teledet Equality of the left-hand-side and of the right-hand-side follows directly from equation ( SECREF4 ). While in this picture we eliminated quantum non-determinism by conditioning on a measurement outcome, there still is something very `quantum' going on here: Alice's (conditioned) measurement is nothing like a passive observation, but a highly non-trivial intervention that makes Alice's state INLINEFORM1 appear at Bob's side: mapsto Let's analyse more carefully what's going on here by explicitly distinguishing the top layer and the bottom layer of this diagram: bottomtop The bottom part: bottom consists of the state INLINEFORM0 together with a special INLINEFORM1 -state, while the top part: top includes the corresponding INLINEFORM2 -effect, as well as an output. By making the bottom part and the top part interact, and, in particular, the INLINEFORM3 and the INLINEFORM4 , the state INLINEFORM5 ends up at the output of the top part. A more sophisticated variation on the same theme makes it much clearer which mechanism is going on here. Using equation ( SECREF4 ), the diagram: telecompl1pre reduces to: INLINEFORM0 The grey dot labeled INLINEFORM0 is some (at this point not important) unitary quantum operation BIBREF13 . 
Let us again consider the bottom and top parts: telecompl1 The top part is a far more sophisticated measurement consisting mainly of INLINEFORM1 -s. Also the bottom part is a lot more sophisticated, involving many INLINEFORM2 -s. These now cause a highly non-trivial interaction of the three states INLINEFORM3 , INLINEFORM4 and INLINEFORM5 . Why we have chosen this particular example will become clear in the next section. What is important to note is that the overall state and overall effect have to be chosen in a very particular way to create the desired interaction, similarly to an old-fashion telephone switchboard that has to be connected in a very precise manner in order to realise the right connection. Example 2: natural language meaning Another example of anti-cartesian togetherness is the manner in which word meanings interact in natural language! Given that logic originated in natural language, when Aristotle analysed arguments involving `and', `if...then', `or', etc., anti-cartesianness can be conceived as some new kind of logic! So what are INLINEFORM0 and INLINEFORM1 in this context? In order to understand what INLINEFORM0 is, we need to understand the mathematics of grammar. The study of the mathematical structure of grammar has indicated that the fundamental things making up sentences are not the words, but some atomic grammatical types, such as the noun-type and the sentence-type BIBREF23 , BIBREF24 , BIBREF25 . The transitive verb-type is not an atomic grammatical type, but a composite made up of two noun-types and one sentence-type. Hence, particularly interesting here is that atomic doesn't really mean smallest... On the other hand, just like in particle physics where we have particles and anti-particles, the atomic types include types as well as anti-types. But unlike in particle physics, there are two kinds of anti-types, namely left ones and right ones. This makes language even more non-commutative than quantum theory! All of this becomes much clearer when considering an example. Let INLINEFORM0 denote the atomic noun-type and let INLINEFORM1 and INLINEFORM2 be the corresponding anti-types. Let INLINEFORM3 denote the atomic sentence-type. Then the non-atomic transitive verb-type is INLINEFORM4 . Intuitively, it is easy to understand why. Consider a transitive verb, like `hate'. Then, simply saying `hate' doesn't convey any useful information, until, we also specify `whom' hates `whom'. That's exactly the role of the anti-types: they specify that in order to form a meaningful sentence, a noun is needed on the left, and a noun is needed on the right: INLINEFORM5 Then, INLINEFORM0 and INLINEFORM1 cancel out, and so do INLINEFORM2 and INLINEFORM3 . What remains is INLINEFORM4 , confirming that `Alice hates Bob' is a grammatically well-typed sentence. We can now depict the cancelations as follows: hates and bingo, we found INLINEFORM5 ! While the mathematics of sentence structure has been explored now for some 80 years, the fact that INLINEFORM0 -s can account for grammatical structure is merely a 15 years old idea BIBREF26 . So what are the INLINEFORM1 -s? That is an even more recent story in which we were involved, and in fact, for which we took inspiration from the story of the previous section BIBREF27 . While INLINEFORM2 -s are about grammar, INLINEFORM3 -s are about meaning. The distributional paradigm for natural language meaning states that meaning can be represented by vectors in a vector space BIBREF28 . 
Until recently, grammatical structure was essentially ignored in doing so, and in particular, there was no theory for how to compute the meaning of a sentence, given the meanings of its words. Our new compositional distributional model of meaning of BIBREF29 does exactly that. In order to explain how this compositional distributional model of meaning works, let's get back to our example. Since we have grammatical types around, the meaning vectors should respect grammatical structure, that is, the vectors representing compound types should themselves live in compound vector spaces. So the string of vectors representing the word meanings of our example would look as follows: hates2 Now we want to put forward a new hypothesis: Grammar is all about how word meanings interact. Inspired by the previous section, this can be realised as follows: hates3 where the INLINEFORM0 -s are now interpreted in exactly the same manner as in the previous section. And here is a more sophisticated example: hatescompl where the INLINEFORM1 -labeled grey circle should now be conceived as negating meaning BIBREF29 . The grammatical structure is here: hatescomplgram It is simply taken from a textbook such as BIBREF32 , the meanings of INLINEFORM2 , INLINEFORM3 and INLINEFORM4 can be automatically generated from some corpus, while the meanings of INLINEFORM5 and INLINEFORM6 are just cleverly chosen to be BIBREF33 , BIBREF29 : hatescomplclever In the previous section we already saw that in this way we obtain: hatescompl2 This indeed captures the intended meaning: INLINEFORM7 where we can think of INLINEFORM0 as being a predicate and INLINEFORM1 as being itself. So an interesting new aspect of the last example is that some of the meaning vectors of words are simply cleverly chosen, and in particular, involve INLINEFORM0 -s. Hence, we genuinely exploit full-blown anti-cartesianess. What anti-cartesianess does here is making sure that the transitive verb INLINEFORM1 `receives' INLINEFORM2 as its object. Note also how INLINEFORM3 does pretty much the same as INLINEFORM4 , guiding word meanings through the sentence, with, of course, one very important additional task: negating the sentence meaning. The cautious reader must of course have noticed that in the previous section we used thick wires, while here we used thin ones. Also, the dots in the full-blown description of quantum teleportation, which represent classical data operations, have vanished in this section. Meanwhile, thick wires as well as the dots all of these have acquired a vary natural role in a more refined model of natural language meaning. The dots allow to cleverly choose the meanings of relative pronouns BIBREF34 , BIBREF35 : relpron Thick wires (representing density matrices, rather than vectors BIBREF13 ) allow to encode word ambiguity as mixedness BIBREF36 , BIBREF37 . For example, the different meanings of the word INLINEFORM0 (a rock band, a person, a bee, a chess piece, or a drag —). Mixedness vanishes when providing a sufficient string of words that disambiguates that meaning, e.g.: queen while in the case of: queen2 we need more disambiguating words, since INLINEFORM1 can still refer to a person, a rock band, as well as a drag queen. Meaning is everything The distributional model of meaning BIBREF28 is very useful in that it allows for automation, given a substantially large corpus of text. However, from a conceptual point of view it is far from ideal. So one may ask the question: What is meaning? 
One may try to play around with a variety of mathematical structures. The method introduced in BIBREF29 doesn't really depend on how one models meaning, as long as we stick to anti-cartesian togetherness, or something sufficiently closely related BIBREF38 . It is an entertaining exercise to play around with the idea of what possibly could be the ultimate mathematical structure that captures meaning in natural language, until one realises that meaning in natural language truly encompasses everything. Indeed, we use language to talk about everything, e.g. logic, life, biology, physics, social behaviours, politics, so the ultimate model of meaning should encompass all of these fields. So, a theory of meaning in natural language is actually a theory of everything! Can we make sense of the template introduced in the previous section for meaning in natural language, as one for ... everything? Let us first investigate, whether the general distributional paradigm can be specialised to the variety of subject domains mentioned above. The manner in which the distributional model works, is that meanings are assigned relative to a fixed chosen set of context words. The meaning vector of any word then arises by counting the number of occurrences of that word in the close neighbourhood of each of the context words, within a large corpus of text. One can think of the context words as attributes, and the relative frequencies as the relevance of an attribute for the word. Simply by specialising the context words and the corpus, one can specialise to a certain subject domain. For example, if one is interested in social behaviours then the corpus could consist of social networking sites, and the context words could be chosen accordingly. This pragmatic approach allows for quantitative analysis, just like the compositional distributional model of BIBREF29 . Here's another example: hunts Here the meaning of pray could include specification of the available pray, and then the meaning of the sentence would capture the survival success of the lion, given the nature of the available pray. All together, the resulting meaning is the result of the interaction between a particular hunter, a particular pray, and the intricacies of the hunting process, which may depend on the particular environment in which it is taking place. It should be clear that again this situation is radically non-cartesian. Of course, if we now consider the example of quantum theory from two sections ago, the analogues to grammatical types are system types i.e. a specification of the kinds (incl. quantity) of systems that are involved. So it makes sense to refine the grammatical types according to the subject domain. Just like nouns in physics would involve specification of the kinds of systems involved, in biology, for example, this could involve specification of species, population size, environment, availability of food etc. Correspondingly, the top part would not just be restricted to grammatical interaction, but also domain specific interaction, just like in the case of quantum theory. All together, what we obtain is the following picture: general as a (very rough) template for a theory of everything. Acknowledgements The extrapolation of meaning beyond natural language was prompted by having to give a course in a workshop on Logics for Social Behaviour, organised by Alexander Kurz and Alessandra Palmigiano at the Lorentz centre in Leiden. The referee provided useful feedback—I learned a new word: `foo'.
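Since the essay stays purely diagrammatic, a small numerical sketch may help to see how its two ingredients fit together: meaning vectors built by counting co-occurrences with a fixed set of context words, and a compositional step in which a transitive verb, a tensor over subject, sentence, and object spaces, "receives" its subject and object through the same cup-style contraction used in the teleportation example. The toy corpus, the context words, and the hand-built verb tensor are all invented for illustration; this is a sketch in the spirit of the compositional distributional model of BIBREF29, not the model itself.

```python
# Toy sketch of (1) distributional word vectors from co-occurrence counts
# and (2) a compositional "subject verb object" meaning obtained by
# contracting a verb tensor with the noun vectors, i.e. the cup-style
# wiring described in the text. Corpus and numbers are invented.
import numpy as np

corpus = ("alice hates bob . bob hates alice . "
          "alice likes carol . carol likes alice .").split()
context_words = ["hates", "likes"]           # the fixed attribute set
nouns = ["alice", "bob", "carol"]
window = 1                                   # neighbourhood size

def meaning_vector(word):
    """Count co-occurrences of `word` with each context word."""
    vec = np.zeros(len(context_words))
    for i, w in enumerate(corpus):
        if w != word:
            continue
        lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
        for c in corpus[lo:hi]:
            if c in context_words:
                vec[context_words.index(c)] += 1
    return vec

noun_vecs = {n: meaning_vector(n) for n in nouns}

# A transitive verb as an order-3 tensor on (subject, sentence, object).
# The sentence space is 1-dimensional here, so a sentence meaning is a
# single number; the verb tensor is chosen by hand for illustration.
dim_n, dim_s = len(context_words), 1
hates = np.zeros((dim_n, dim_s, dim_n))
hates[0, 0, 0] = 1.0                         # responds to the "hates" feature

def sentence_meaning(subj, verb, obj):
    """Contract subject and object vectors into the verb tensor (the cups)."""
    return np.einsum("i,isj,j->s", noun_vecs[subj], verb, noun_vecs[obj])

print("alice hates bob  :", sentence_meaning("alice", hates, "bob"))
print("alice hates carol:", sentence_meaning("alice", hates, "carol"))
```

On this toy corpus "alice hates bob" scores higher than "alice hates carol", which is the kind of behaviour the compositional contraction is meant to produce; in the actual model the noun vectors come from a large corpus and the verb tensors are learned or constructed rather than set by hand.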
Unanswerable
691cba5713c76a6870e35bc248ce1d29c0550bc7
691cba5713c76a6870e35bc248ce1d29c0550bc7_0
Q: Do they argue that all words can be derived from other (elementary) words? Text: ...in the beginning was ⊗\otimes No physicists! ...the symbol INLINEFORM0 above does not stand for the operation that turns two Hilbert spaces into the smallest Hilbert space in which the two given ones bilinearly embed. No category-theoreticians! ...neither does it stand for the composition operation that turns any pair of objects (and morphisms) in a monoidal category into another object, and that is subject to a horrendous bunch of conditions that guaranty coherence with the remainder of the structure. Instead, this is what it means: INLINEFORM1 More specifically, it represents the togetherness of foo INLINEFORM0 and foo INLINEFORM1 without giving any specification of who/what foo INLINEFORM2 and foo INLINEFORM3 actually are. Differently put, it's the new stuff that emerges when foo INLINEFORM4 and foo INLINEFORM5 get together. If they don't like each other at all, this may be a fight. If they do like each other a lot, this may be a marriage, and a bit later, babies. Note that togetherness is vital for the emergence to actually take place, given that it is quite hard to either have a fight, a wedding, or a baby, if there is nobody else around. It is of course true that in von Neumann's formalisation of quantum theory the tensor product of Hilbert spaces (also denoted by INLINEFORM0 ) plays this role BIBREF0 , giving rise to the emergent phenomenon of entanglement BIBREF1 , BIBREF2 . And more generally, in category theory one can axiomatise composition of objects (again denoted by INLINEFORM1 ) within a symmetric monoidal category BIBREF3 , giving rise to elements that don't simply arise by pairing, just like in the case of the Hilbert space tensor product. However, in the case of von Neumann's formalisation of quantum theory we are talking about a formalisation which, despite being widely used, its creator von Neumann himself didn't even like BIBREF4 . Moreover, in this formalism INLINEFORM0 only arises as a secondary construct, requiring a detailed description of foo INLINEFORM1 and foo INLINEFORM2 , whose togetherness it describes. What we are after is a `foo-less' conception of INLINEFORM3 . The composition operation INLINEFORM4 in symmetric monoidal categories heads in that direction. However, by making an unnecessary commitment to set-theory, it makes things unnecessarily complicated BIBREF5 . Moreover, while this operation is general enough to accommodate togetherness, it doesn't really tell us anything about it. The title of this section is a metaphor aimed at confronting the complete disregard that the concept of togetherness has suffered in the sciences, and especially, in physics, where all of the effort has been on describing the individual, typically by breaking its description down to that of even smaller individuals. While, without any doubt, this has been a useful endeavour, it unfortunately has evolved in a rigid doctrine, leaving no space for anything else. The most extreme manifestation of this dogma is the use of the term `theory of everything' in particle physics. We will provide an alternative conceptual template for a theory of everything, supported not only by scientific examples, but also by everyday ones. Biology evolved from chopping up individual animals in laboratories, to considering them in the context of other other animals and varying environments. the result is the theory of evolution of species. 
Similarly, our current (still very poor) understanding of the human brain makes it clear that the human brain should not be studied as something in isolation, but as something that fundamentally requires interaction with other brains BIBREF6 . In contemporary audio equipment, music consists of nothing but a strings of zeros and ones. Instead, the entities that truly make up music are pitch, sound, rhythm, chord progression, crescendo, and so on. And in particular, music is not just a bag of these, since their intricate interaction is even more important than these constituents themselves. The same is true for film, where it isn't even that clear what it is made up from, but it does include such things as (easily replaceable) actors, decors, cameras, which all are part of a soup stirred by a director. But again, in contemporary video equipment, it is nothing but a string of zeros and ones. In fact, everything that goes on in pretty much all modern devices is nothing but zeros and ones. While it was Turing's brilliance to realise that this could in fact be done, and provided a foundation for the theory of computability BIBREF7 , this is in fact the only place where the zeros and ones are truly meaningful, in the form of a Turing machine. Elsewhere, it is nothing but a (universal) representation, with no conceptual qualities regarding the subject matter. Formalising togetherness 1: not there yet So, how does one go about formalising the concept of togetherness? While we don't want an explicit description of the foo involved, we do need some kind of means for identifying foo. Therefore, we simple give each foo a name, say INLINEFORM0 . Then, INLINEFORM1 represents the togetherness of INLINEFORM2 and INLINEFORM3 . We also don't want an explicit description of INLINEFORM4 , so how can we say anything about INLINEFORM5 without explicitly describing INLINEFORM6 , INLINEFORM7 and INLINEFORM8 ? Well, rather than describing these systems themselves, we could describe their relationships. For example, in a certain theory togetherness could obey the following equation: INLINEFORM0 That is, togetherness of two copies of something is exactly the same as a single copy, or in simpler terms, one is as good as two. For example, if one is in need of a plumber to fix a pipe, one only needs one. The only thing a second plumber would contribute is a bill for the time he wasted coming to your house. Obviously, this is not the kind of togetherness that we are really interested in, given that this kind adds nothing at all. A tiny bit more interesting is the case that two is as good as three: INLINEFORM0 e.g. when something needs to be carried on a staircase, but there really is only space for two people to be involved. Or, when INLINEFORM0 is female and INLINEFORM1 is male, and the goal is reproduction, we have: INLINEFORM2 (ignoring testosterone induced scuffles and the benefits of natural selection.) We really won't get very far this manner. One way in which things can be improved is by replacing equations by inequalities. For example, while: INLINEFORM0 simply means that one of the two is redundant, instead: INLINEFORM0 can mean that from INLINEFORM0 we can produce INLINEFORM1 , and: INLINEFORM2 can mean that from INLINEFORM0 and INLINEFORM1 together we can produce INLINEFORM2 , and: INLINEFORM3 can mean that in the presence of INLINEFORM0 from INLINEFORM1 we can produce INLINEFORM2 , i.e. that INLINEFORM3 is a catalyst. 
What we have now is a so-called resource theory, that is, a theory which captures how stuff we care about can be interconverted BIBREF8 . Resource theories allow for quantitative analysis, for example, in terms of a conversion rate: INLINEFORM0 So evidently we have some genuine substance now. Formalising togetherness 2: that's better But we can still do a lot better. What a resource theory fails to capture (on purpose in fact) is the actual process that converts one resource into another one. So let's fix that problem, and explicitly account for processes. In terms of togetherness, this means that we bring the fun foo INLINEFORM0 and foo INLINEFORM1 can have together explicitly in the picture. Let: INLINEFORM2 denote some process that transforms INLINEFORM0 into INLINEFORM1 .Then, given two such processes INLINEFORM2 and INLINEFORM3 we can also consider their togetherness: INLINEFORM4 Moreover, some processes can be sequentially chained: INLINEFORM0 We say `some', since INLINEFORM0 has to produce INLINEFORM1 , in order for: INLINEFORM2 to take place. Now, here one may end up in a bit of a mess if one isn't clever. In particular, with a bit of thinking one quickly realises that one wants some equations to be obeyed, for example: (f1f2)f3 = f1(f2f3) h(gf) = (hg)f and a bit more sophisticated, also: (g1g2)(f1f2)=(g1f1)(g2f2) There may even be some more equations that one wants to have, but which ones? This turns out to be a very difficult problem. Too difficult in the light of our limited existence in this world. The origin of this problem is that we treat INLINEFORM0 , and also INLINEFORM1 , as algebraic connectives, and that algebra has its roots in set-theory. The larger-than-life problem can be avoided in a manner that is equally elegant as it is simple. To state that things are together, we just write them down together: INLINEFORM0 There really is no reason to adjoin the symbol INLINEFORM0 between them. Now, this INLINEFORM1 and INLINEFORM2 will play the role of an input or an output of processes transforming them. Therefore, it will be useful to represent them by a wire: INLINEFORM3 Then, a process transforming INLINEFORM0 into INLINEFORM1 can be represented by a box: INLINEFORM2 Togetherness of processes now becomes: INLINEFORM0 and chaining processes becomes: INLINEFORM0 In particular, equations ( SECREF3 ), ( SECREF3 ) and ( SECREF3 ) become: INLINEFORM0 That is, all equations have become tautologies! Anti-cartesian togetherness One important kind of processes are states: INLINEFORM0 These are depicted without any inputs, where `no wire' can be read as `nothing' (or `no-foo'). The opposite notion is that of an effect, that is, a process without an output: INLINEFORM0 borrowing terminology from quantum theory. We can now identify those theories in which togetherness doesn't yield anything new. Life in such a world is pretty lonely... A theory of togetherness is cartesian if each state: INLINEFORM0 decomposes as follows: INLINEFORM0 So cartesianness means that all possible realisations of two foo-s can be achieved by pairing realisations of the individual foo-s involved. In short, a whole can be described in term of its parts, rendering togetherness a void concept. So very lonely and indeed... But, wait a minute. Why is it then the case that so much of traditional mathematics follows this cartesian template, and that even category theory for a long time has followed a strict cartesian stance? Beats me. Seriously...beats me! 
Anyway, an obvious consequence of this is that for those areas where togetherness is a genuinely non-trivial concept, traditional mathematical structures aren't always that useful. That is maybe why social sciences don't make much use of any kind of modern pure mathematics. And now for something completely different: A theory of togetherness is anti-cartesian if for each INLINEFORM0 there exists INLINEFORM1 , a special state INLINEFORM2 and a special effect INLINEFORM3 : INLINEFORM4 which are such that the following equation holds: yanking The reason for `anti' in the name of this kind of togetherness is the fact that when a theory of togetherness is both cartesian and anti-cartesian, then it is nothing but a theory of absolute death, i.e. it describes a world in which nothing ever happens. Indeed, we have: INLINEFORM0 That is, the identity is a constant process, always outputting the state INLINEFORM0 , independent of what the input is. And if that isn't weird enough, any arbitrary process INLINEFORM1 does the same: INLINEFORM2 Therefore, any anti-cartesian theory of togetherness that involves some aspect of change cannot be cartesian, and hence will have interesting stuff emerging from togetherness. Example 1: quantum theory Anti-cartesian togetherness is a very particular alternative to cartesian togetherness (contra any theory that fails to be cartesian). So one may wonder whether there are any interesting examples. And yes, there are! One example is quantum entanglement in quantum theory. That is in fact where the author's interest in anti-cartesian togetherness started BIBREF15 , BIBREF16 , BIBREF14 . As shown in these papers, equation ( SECREF4 ) pretty much embodies the phenomenon of quantum teleportation BIBREF19 . The full-blown description of quantum teleportation goes as follows BIBREF20 , BIBREF21 , BIBREF13 : telefull It is not important to fully understand the details here. What is important is to note that the bit of this diagram corresponding to equation ( SECREF4 ) is the bold wire which zig-zags through it: telefullbit The thin wires and the boxes labelled INLINEFORM0 are related to the fact that quantum theory is non-deterministic. By conditioning on particular measurement outcomes, teleportation simplifies to BIBREF13 : teledet Equality of the left-hand-side and of the right-hand-side follows directly from equation ( SECREF4 ). While in this picture we eliminated quantum non-determinism by conditioning on a measurement outcome, there still is something very `quantum' going on here: Alice's (conditioned) measurement is nothing like a passive observation, but a highly non-trivial intervention that makes Alice's state INLINEFORM1 appear at Bob's side: mapsto Let's analyse more carefully what's going on here by explicitly distinguishing the top layer and the bottom layer of this diagram: bottomtop The bottom part: bottom consists of the state INLINEFORM0 together with a special INLINEFORM1 -state, while the top part: top includes the corresponding INLINEFORM2 -effect, as well as an output. By making the bottom part and the top part interact, and, in particular, the INLINEFORM3 and the INLINEFORM4 , the state INLINEFORM5 ends up at the output of the top part. A more sophisticated variation on the same theme makes it much clearer which mechanism is going on here. Using equation ( SECREF4 ), the diagram: telecompl1pre reduces to: INLINEFORM0 The grey dot labeled INLINEFORM0 is some (at this point not important) unitary quantum operation BIBREF13 . 
Let us again consider the bottom and top parts: telecompl1 The top part is a far more sophisticated measurement consisting mainly of INLINEFORM1 -s. Also the bottom part is a lot more sophisticated, involving many INLINEFORM2 -s. These now cause a highly non-trivial interaction of the three states INLINEFORM3 , INLINEFORM4 and INLINEFORM5 . Why we have chosen this particular example will become clear in the next section. What is important to note is that the overall state and overall effect have to be chosen in a very particular way to create the desired interaction, similarly to an old-fashion telephone switchboard that has to be connected in a very precise manner in order to realise the right connection. Example 2: natural language meaning Another example of anti-cartesian togetherness is the manner in which word meanings interact in natural language! Given that logic originated in natural language, when Aristotle analysed arguments involving `and', `if...then', `or', etc., anti-cartesianness can be conceived as some new kind of logic! So what are INLINEFORM0 and INLINEFORM1 in this context? In order to understand what INLINEFORM0 is, we need to understand the mathematics of grammar. The study of the mathematical structure of grammar has indicated that the fundamental things making up sentences are not the words, but some atomic grammatical types, such as the noun-type and the sentence-type BIBREF23 , BIBREF24 , BIBREF25 . The transitive verb-type is not an atomic grammatical type, but a composite made up of two noun-types and one sentence-type. Hence, particularly interesting here is that atomic doesn't really mean smallest... On the other hand, just like in particle physics where we have particles and anti-particles, the atomic types include types as well as anti-types. But unlike in particle physics, there are two kinds of anti-types, namely left ones and right ones. This makes language even more non-commutative than quantum theory! All of this becomes much clearer when considering an example. Let INLINEFORM0 denote the atomic noun-type and let INLINEFORM1 and INLINEFORM2 be the corresponding anti-types. Let INLINEFORM3 denote the atomic sentence-type. Then the non-atomic transitive verb-type is INLINEFORM4 . Intuitively, it is easy to understand why. Consider a transitive verb, like `hate'. Then, simply saying `hate' doesn't convey any useful information, until, we also specify `whom' hates `whom'. That's exactly the role of the anti-types: they specify that in order to form a meaningful sentence, a noun is needed on the left, and a noun is needed on the right: INLINEFORM5 Then, INLINEFORM0 and INLINEFORM1 cancel out, and so do INLINEFORM2 and INLINEFORM3 . What remains is INLINEFORM4 , confirming that `Alice hates Bob' is a grammatically well-typed sentence. We can now depict the cancelations as follows: hates and bingo, we found INLINEFORM5 ! While the mathematics of sentence structure has been explored now for some 80 years, the fact that INLINEFORM0 -s can account for grammatical structure is merely a 15 years old idea BIBREF26 . So what are the INLINEFORM1 -s? That is an even more recent story in which we were involved, and in fact, for which we took inspiration from the story of the previous section BIBREF27 . While INLINEFORM2 -s are about grammar, INLINEFORM3 -s are about meaning. The distributional paradigm for natural language meaning states that meaning can be represented by vectors in a vector space BIBREF28 . 
Until recently, grammatical structure was essentially ignored in doing so, and in particular, there was no theory for how to compute the meaning of a sentence, given the meanings of its words. Our new compositional distributional model of meaning of BIBREF29 does exactly that. In order to explain how this compositional distributional model of meaning works, let's get back to our example. Since we have grammatical types around, the meaning vectors should respect grammatical structure, that is, the vectors representing compound types should themselves live in compound vector spaces. So the string of vectors representing the word meanings of our example would look as follows: hates2 Now we want to put forward a new hypothesis: Grammar is all about how word meanings interact. Inspired by the previous section, this can be realised as follows: hates3 where the INLINEFORM0 -s are now interpreted in exactly the same manner as in the previous section. And here is a more sophisticated example: hatescompl where the INLINEFORM1 -labeled grey circle should now be conceived as negating meaning BIBREF29 . The grammatical structure is here: hatescomplgram It is simply taken from a textbook such as BIBREF32 , the meanings of INLINEFORM2 , INLINEFORM3 and INLINEFORM4 can be automatically generated from some corpus, while the meanings of INLINEFORM5 and INLINEFORM6 are just cleverly chosen to be BIBREF33 , BIBREF29 : hatescomplclever In the previous section we already saw that in this way we obtain: hatescompl2 This indeed captures the intended meaning: INLINEFORM7 where we can think of INLINEFORM0 as being a predicate and INLINEFORM1 as being itself. So an interesting new aspect of the last example is that some of the meaning vectors of words are simply cleverly chosen, and in particular, involve INLINEFORM0 -s. Hence, we genuinely exploit full-blown anti-cartesianess. What anti-cartesianess does here is making sure that the transitive verb INLINEFORM1 `receives' INLINEFORM2 as its object. Note also how INLINEFORM3 does pretty much the same as INLINEFORM4 , guiding word meanings through the sentence, with, of course, one very important additional task: negating the sentence meaning. The cautious reader must of course have noticed that in the previous section we used thick wires, while here we used thin ones. Also, the dots in the full-blown description of quantum teleportation, which represent classical data operations, have vanished in this section. Meanwhile, thick wires as well as the dots all of these have acquired a vary natural role in a more refined model of natural language meaning. The dots allow to cleverly choose the meanings of relative pronouns BIBREF34 , BIBREF35 : relpron Thick wires (representing density matrices, rather than vectors BIBREF13 ) allow to encode word ambiguity as mixedness BIBREF36 , BIBREF37 . For example, the different meanings of the word INLINEFORM0 (a rock band, a person, a bee, a chess piece, or a drag —). Mixedness vanishes when providing a sufficient string of words that disambiguates that meaning, e.g.: queen while in the case of: queen2 we need more disambiguating words, since INLINEFORM1 can still refer to a person, a rock band, as well as a drag queen. Meaning is everything The distributional model of meaning BIBREF28 is very useful in that it allows for automation, given a substantially large corpus of text. However, from a conceptual point of view it is far from ideal. So one may ask the question: What is meaning? 
One may try to play around with a variety of mathematical structures. The method introduced in BIBREF29 doesn't really depend on how one models meaning, as long as we stick to anti-cartesian togetherness, or something sufficiently closely related BIBREF38 . It is an entertaining exercise to play around with the idea of what possibly could be the ultimate mathematical structure that captures meaning in natural language, until one realises that meaning in natural language truly encompasses everything. Indeed, we use language to talk about everything, e.g. logic, life, biology, physics, social behaviours, politics, so the ultimate model of meaning should encompass all of these fields. So, a theory of meaning in natural language is actually a theory of everything! Can we make sense of the template introduced in the previous section for meaning in natural language, as one for ... everything? Let us first investigate, whether the general distributional paradigm can be specialised to the variety of subject domains mentioned above. The manner in which the distributional model works, is that meanings are assigned relative to a fixed chosen set of context words. The meaning vector of any word then arises by counting the number of occurrences of that word in the close neighbourhood of each of the context words, within a large corpus of text. One can think of the context words as attributes, and the relative frequencies as the relevance of an attribute for the word. Simply by specialising the context words and the corpus, one can specialise to a certain subject domain. For example, if one is interested in social behaviours then the corpus could consist of social networking sites, and the context words could be chosen accordingly. This pragmatic approach allows for quantitative analysis, just like the compositional distributional model of BIBREF29 . Here's another example: hunts Here the meaning of pray could include specification of the available pray, and then the meaning of the sentence would capture the survival success of the lion, given the nature of the available pray. All together, the resulting meaning is the result of the interaction between a particular hunter, a particular pray, and the intricacies of the hunting process, which may depend on the particular environment in which it is taking place. It should be clear that again this situation is radically non-cartesian. Of course, if we now consider the example of quantum theory from two sections ago, the analogues to grammatical types are system types i.e. a specification of the kinds (incl. quantity) of systems that are involved. So it makes sense to refine the grammatical types according to the subject domain. Just like nouns in physics would involve specification of the kinds of systems involved, in biology, for example, this could involve specification of species, population size, environment, availability of food etc. Correspondingly, the top part would not just be restricted to grammatical interaction, but also domain specific interaction, just like in the case of quantum theory. All together, what we obtain is the following picture: general as a (very rough) template for a theory of everything. Acknowledgements The extrapolation of meaning beyond natural language was prompted by having to give a course in a workshop on Logics for Social Behaviour, organised by Alexander Kurz and Alessandra Palmigiano at the Lorentz centre in Leiden. The referee provided useful feedback—I learned a new word: `foo'.
No
4542a4e7eabb8006fb7bcff2ca6347cfb3fbc56b
4542a4e7eabb8006fb7bcff2ca6347cfb3fbc56b_0
Q: Do they break down word meanings into elementary particles as in the standard model of quantum theory? Text: ...in the beginning was ⊗\otimes No physicists! ...the symbol INLINEFORM0 above does not stand for the operation that turns two Hilbert spaces into the smallest Hilbert space in which the two given ones bilinearly embed. No category-theoreticians! ...neither does it stand for the composition operation that turns any pair of objects (and morphisms) in a monoidal category into another object, and that is subject to a horrendous bunch of conditions that guaranty coherence with the remainder of the structure. Instead, this is what it means: INLINEFORM1 More specifically, it represents the togetherness of foo INLINEFORM0 and foo INLINEFORM1 without giving any specification of who/what foo INLINEFORM2 and foo INLINEFORM3 actually are. Differently put, it's the new stuff that emerges when foo INLINEFORM4 and foo INLINEFORM5 get together. If they don't like each other at all, this may be a fight. If they do like each other a lot, this may be a marriage, and a bit later, babies. Note that togetherness is vital for the emergence to actually take place, given that it is quite hard to either have a fight, a wedding, or a baby, if there is nobody else around. It is of course true that in von Neumann's formalisation of quantum theory the tensor product of Hilbert spaces (also denoted by INLINEFORM0 ) plays this role BIBREF0 , giving rise to the emergent phenomenon of entanglement BIBREF1 , BIBREF2 . And more generally, in category theory one can axiomatise composition of objects (again denoted by INLINEFORM1 ) within a symmetric monoidal category BIBREF3 , giving rise to elements that don't simply arise by pairing, just like in the case of the Hilbert space tensor product. However, in the case of von Neumann's formalisation of quantum theory we are talking about a formalisation which, despite being widely used, its creator von Neumann himself didn't even like BIBREF4 . Moreover, in this formalism INLINEFORM0 only arises as a secondary construct, requiring a detailed description of foo INLINEFORM1 and foo INLINEFORM2 , whose togetherness it describes. What we are after is a `foo-less' conception of INLINEFORM3 . The composition operation INLINEFORM4 in symmetric monoidal categories heads in that direction. However, by making an unnecessary commitment to set-theory, it makes things unnecessarily complicated BIBREF5 . Moreover, while this operation is general enough to accommodate togetherness, it doesn't really tell us anything about it. The title of this section is a metaphor aimed at confronting the complete disregard that the concept of togetherness has suffered in the sciences, and especially, in physics, where all of the effort has been on describing the individual, typically by breaking its description down to that of even smaller individuals. While, without any doubt, this has been a useful endeavour, it unfortunately has evolved in a rigid doctrine, leaving no space for anything else. The most extreme manifestation of this dogma is the use of the term `theory of everything' in particle physics. We will provide an alternative conceptual template for a theory of everything, supported not only by scientific examples, but also by everyday ones. Biology evolved from chopping up individual animals in laboratories, to considering them in the context of other other animals and varying environments. the result is the theory of evolution of species. 
Similarly, our current (still very poor) understanding of the human brain makes it clear that the human brain should not be studied as something in isolation, but as something that fundamentally requires interaction with other brains BIBREF6 . In contemporary audio equipment, music consists of nothing but a strings of zeros and ones. Instead, the entities that truly make up music are pitch, sound, rhythm, chord progression, crescendo, and so on. And in particular, music is not just a bag of these, since their intricate interaction is even more important than these constituents themselves. The same is true for film, where it isn't even that clear what it is made up from, but it does include such things as (easily replaceable) actors, decors, cameras, which all are part of a soup stirred by a director. But again, in contemporary video equipment, it is nothing but a string of zeros and ones. In fact, everything that goes on in pretty much all modern devices is nothing but zeros and ones. While it was Turing's brilliance to realise that this could in fact be done, and provided a foundation for the theory of computability BIBREF7 , this is in fact the only place where the zeros and ones are truly meaningful, in the form of a Turing machine. Elsewhere, it is nothing but a (universal) representation, with no conceptual qualities regarding the subject matter. Formalising togetherness 1: not there yet So, how does one go about formalising the concept of togetherness? While we don't want an explicit description of the foo involved, we do need some kind of means for identifying foo. Therefore, we simple give each foo a name, say INLINEFORM0 . Then, INLINEFORM1 represents the togetherness of INLINEFORM2 and INLINEFORM3 . We also don't want an explicit description of INLINEFORM4 , so how can we say anything about INLINEFORM5 without explicitly describing INLINEFORM6 , INLINEFORM7 and INLINEFORM8 ? Well, rather than describing these systems themselves, we could describe their relationships. For example, in a certain theory togetherness could obey the following equation: INLINEFORM0 That is, togetherness of two copies of something is exactly the same as a single copy, or in simpler terms, one is as good as two. For example, if one is in need of a plumber to fix a pipe, one only needs one. The only thing a second plumber would contribute is a bill for the time he wasted coming to your house. Obviously, this is not the kind of togetherness that we are really interested in, given that this kind adds nothing at all. A tiny bit more interesting is the case that two is as good as three: INLINEFORM0 e.g. when something needs to be carried on a staircase, but there really is only space for two people to be involved. Or, when INLINEFORM0 is female and INLINEFORM1 is male, and the goal is reproduction, we have: INLINEFORM2 (ignoring testosterone induced scuffles and the benefits of natural selection.) We really won't get very far this manner. One way in which things can be improved is by replacing equations by inequalities. For example, while: INLINEFORM0 simply means that one of the two is redundant, instead: INLINEFORM0 can mean that from INLINEFORM0 we can produce INLINEFORM1 , and: INLINEFORM2 can mean that from INLINEFORM0 and INLINEFORM1 together we can produce INLINEFORM2 , and: INLINEFORM3 can mean that in the presence of INLINEFORM0 from INLINEFORM1 we can produce INLINEFORM2 , i.e. that INLINEFORM3 is a catalyst. 
What we have now is a so-called resource theory, that is, a theory which captures how stuff we care about can be interconverted BIBREF8 . Resource theories allow for quantitative analysis, for example, in terms of a conversion rate: INLINEFORM0 So evidently we have some genuine substance now. Formalising togetherness 2: that's better But we can still do a lot better. What a resource theory fails to capture (on purpose in fact) is the actual process that converts one resource into another one. So let's fix that problem, and explicitly account for processes. In terms of togetherness, this means that we bring the fun foo INLINEFORM0 and foo INLINEFORM1 can have together explicitly in the picture. Let: INLINEFORM2 denote some process that transforms INLINEFORM0 into INLINEFORM1 .Then, given two such processes INLINEFORM2 and INLINEFORM3 we can also consider their togetherness: INLINEFORM4 Moreover, some processes can be sequentially chained: INLINEFORM0 We say `some', since INLINEFORM0 has to produce INLINEFORM1 , in order for: INLINEFORM2 to take place. Now, here one may end up in a bit of a mess if one isn't clever. In particular, with a bit of thinking one quickly realises that one wants some equations to be obeyed, for example: (f1f2)f3 = f1(f2f3) h(gf) = (hg)f and a bit more sophisticated, also: (g1g2)(f1f2)=(g1f1)(g2f2) There may even be some more equations that one wants to have, but which ones? This turns out to be a very difficult problem. Too difficult in the light of our limited existence in this world. The origin of this problem is that we treat INLINEFORM0 , and also INLINEFORM1 , as algebraic connectives, and that algebra has its roots in set-theory. The larger-than-life problem can be avoided in a manner that is equally elegant as it is simple. To state that things are together, we just write them down together: INLINEFORM0 There really is no reason to adjoin the symbol INLINEFORM0 between them. Now, this INLINEFORM1 and INLINEFORM2 will play the role of an input or an output of processes transforming them. Therefore, it will be useful to represent them by a wire: INLINEFORM3 Then, a process transforming INLINEFORM0 into INLINEFORM1 can be represented by a box: INLINEFORM2 Togetherness of processes now becomes: INLINEFORM0 and chaining processes becomes: INLINEFORM0 In particular, equations ( SECREF3 ), ( SECREF3 ) and ( SECREF3 ) become: INLINEFORM0 That is, all equations have become tautologies! Anti-cartesian togetherness One important kind of processes are states: INLINEFORM0 These are depicted without any inputs, where `no wire' can be read as `nothing' (or `no-foo'). The opposite notion is that of an effect, that is, a process without an output: INLINEFORM0 borrowing terminology from quantum theory. We can now identify those theories in which togetherness doesn't yield anything new. Life in such a world is pretty lonely... A theory of togetherness is cartesian if each state: INLINEFORM0 decomposes as follows: INLINEFORM0 So cartesianness means that all possible realisations of two foo-s can be achieved by pairing realisations of the individual foo-s involved. In short, a whole can be described in term of its parts, rendering togetherness a void concept. So very lonely and indeed... But, wait a minute. Why is it then the case that so much of traditional mathematics follows this cartesian template, and that even category theory for a long time has followed a strict cartesian stance? Beats me. Seriously...beats me! 
Anyway, an obvious consequence of this is that for those areas where togetherness is a genuinely non-trivial concept, traditional mathematical structures aren't always that useful. That is maybe why social sciences don't make much use of any kind of modern pure mathematics. And now for something completely different: A theory of togetherness is anti-cartesian if for each INLINEFORM0 there exists INLINEFORM1 , a special state INLINEFORM2 and a special effect INLINEFORM3 : INLINEFORM4 which are such that the following equation holds: yanking The reason for `anti' in the name of this kind of togetherness is the fact that when a theory of togetherness is both cartesian and anti-cartesian, then it is nothing but a theory of absolute death, i.e. it describes a world in which nothing ever happens. Indeed, we have: INLINEFORM0 That is, the identity is a constant process, always outputting the state INLINEFORM0 , independent of what the input is. And if that isn't weird enough, any arbitrary process INLINEFORM1 does the same: INLINEFORM2 Therefore, any anti-cartesian theory of togetherness that involves some aspect of change cannot be cartesian, and hence will have interesting stuff emerging from togetherness. Example 1: quantum theory Anti-cartesian togetherness is a very particular alternative to cartesian togetherness (contra any theory that fails to be cartesian). So one may wonder whether there are any interesting examples. And yes, there are! One example is quantum entanglement in quantum theory. That is in fact where the author's interest in anti-cartesian togetherness started BIBREF15 , BIBREF16 , BIBREF14 . As shown in these papers, equation ( SECREF4 ) pretty much embodies the phenomenon of quantum teleportation BIBREF19 . The full-blown description of quantum teleportation goes as follows BIBREF20 , BIBREF21 , BIBREF13 : telefull It is not important to fully understand the details here. What is important is to note that the bit of this diagram corresponding to equation ( SECREF4 ) is the bold wire which zig-zags through it: telefullbit The thin wires and the boxes labelled INLINEFORM0 are related to the fact that quantum theory is non-deterministic. By conditioning on particular measurement outcomes, teleportation simplifies to BIBREF13 : teledet Equality of the left-hand-side and of the right-hand-side follows directly from equation ( SECREF4 ). While in this picture we eliminated quantum non-determinism by conditioning on a measurement outcome, there still is something very `quantum' going on here: Alice's (conditioned) measurement is nothing like a passive observation, but a highly non-trivial intervention that makes Alice's state INLINEFORM1 appear at Bob's side: mapsto Let's analyse more carefully what's going on here by explicitly distinguishing the top layer and the bottom layer of this diagram: bottomtop The bottom part: bottom consists of the state INLINEFORM0 together with a special INLINEFORM1 -state, while the top part: top includes the corresponding INLINEFORM2 -effect, as well as an output. By making the bottom part and the top part interact, and, in particular, the INLINEFORM3 and the INLINEFORM4 , the state INLINEFORM5 ends up at the output of the top part. A more sophisticated variation on the same theme makes it much clearer which mechanism is going on here. Using equation ( SECREF4 ), the diagram: telecompl1pre reduces to: INLINEFORM0 The grey dot labeled INLINEFORM0 is some (at this point not important) unitary quantum operation BIBREF13 . 
Let us again consider the bottom and top parts: telecompl1 The top part is a far more sophisticated measurement consisting mainly of INLINEFORM1 -s. Also the bottom part is a lot more sophisticated, involving many INLINEFORM2 -s. These now cause a highly non-trivial interaction of the three states INLINEFORM3 , INLINEFORM4 and INLINEFORM5 . Why we have chosen this particular example will become clear in the next section. What is important to note is that the overall state and overall effect have to be chosen in a very particular way to create the desired interaction, similarly to an old-fashion telephone switchboard that has to be connected in a very precise manner in order to realise the right connection. Example 2: natural language meaning Another example of anti-cartesian togetherness is the manner in which word meanings interact in natural language! Given that logic originated in natural language, when Aristotle analysed arguments involving `and', `if...then', `or', etc., anti-cartesianness can be conceived as some new kind of logic! So what are INLINEFORM0 and INLINEFORM1 in this context? In order to understand what INLINEFORM0 is, we need to understand the mathematics of grammar. The study of the mathematical structure of grammar has indicated that the fundamental things making up sentences are not the words, but some atomic grammatical types, such as the noun-type and the sentence-type BIBREF23 , BIBREF24 , BIBREF25 . The transitive verb-type is not an atomic grammatical type, but a composite made up of two noun-types and one sentence-type. Hence, particularly interesting here is that atomic doesn't really mean smallest... On the other hand, just like in particle physics where we have particles and anti-particles, the atomic types include types as well as anti-types. But unlike in particle physics, there are two kinds of anti-types, namely left ones and right ones. This makes language even more non-commutative than quantum theory! All of this becomes much clearer when considering an example. Let INLINEFORM0 denote the atomic noun-type and let INLINEFORM1 and INLINEFORM2 be the corresponding anti-types. Let INLINEFORM3 denote the atomic sentence-type. Then the non-atomic transitive verb-type is INLINEFORM4 . Intuitively, it is easy to understand why. Consider a transitive verb, like `hate'. Then, simply saying `hate' doesn't convey any useful information, until, we also specify `whom' hates `whom'. That's exactly the role of the anti-types: they specify that in order to form a meaningful sentence, a noun is needed on the left, and a noun is needed on the right: INLINEFORM5 Then, INLINEFORM0 and INLINEFORM1 cancel out, and so do INLINEFORM2 and INLINEFORM3 . What remains is INLINEFORM4 , confirming that `Alice hates Bob' is a grammatically well-typed sentence. We can now depict the cancelations as follows: hates and bingo, we found INLINEFORM5 ! While the mathematics of sentence structure has been explored now for some 80 years, the fact that INLINEFORM0 -s can account for grammatical structure is merely a 15 years old idea BIBREF26 . So what are the INLINEFORM1 -s? That is an even more recent story in which we were involved, and in fact, for which we took inspiration from the story of the previous section BIBREF27 . While INLINEFORM2 -s are about grammar, INLINEFORM3 -s are about meaning. The distributional paradigm for natural language meaning states that meaning can be represented by vectors in a vector space BIBREF28 . 
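As a small illustration of the cancellations just described, here is a sketch that reduces the type string of `Alice hates Bob' to the sentence type. The string encoding of the types ("n", "n_r", "s", "n_l") is a notational assumption of the sketch.

```python
# Grammatical types as strings: "n" (noun), "s" (sentence),
# "n_r" / "n_l" (right / left anti-types of the noun type).
CANCEL = {("n", "n_r"), ("n_l", "n")}   # adjacent pairs that annihilate

def reduce_types(types):
    """Repeatedly cancel adjacent type/anti-type pairs."""
    types = list(types)
    changed = True
    while changed:
        changed = False
        for i in range(len(types) - 1):
            if (types[i], types[i + 1]) in CANCEL:
                del types[i:i + 2]
                changed = True
                break
    return types

# "Alice hates Bob": noun . transitive-verb . noun,
# with the transitive verb typed as  n_r  s  n_l.
sentence = ["n"] + ["n_r", "s", "n_l"] + ["n"]
print(reduce_types(sentence))   # ['s']  -> a grammatically well-typed sentence
```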
Until recently, grammatical structure was essentially ignored in doing so, and in particular, there was no theory for how to compute the meaning of a sentence, given the meanings of its words. Our new compositional distributional model of meaning of BIBREF29 does exactly that. In order to explain how this compositional distributional model of meaning works, let's get back to our example. Since we have grammatical types around, the meaning vectors should respect grammatical structure, that is, the vectors representing compound types should themselves live in compound vector spaces. So the string of vectors representing the word meanings of our example would look as follows: hates2 Now we want to put forward a new hypothesis: Grammar is all about how word meanings interact. Inspired by the previous section, this can be realised as follows: hates3 where the INLINEFORM0 -s are now interpreted in exactly the same manner as in the previous section. And here is a more sophisticated example: hatescompl where the INLINEFORM1 -labeled grey circle should now be conceived as negating meaning BIBREF29 . The grammatical structure is here: hatescomplgram It is simply taken from a textbook such as BIBREF32 , the meanings of INLINEFORM2 , INLINEFORM3 and INLINEFORM4 can be automatically generated from some corpus, while the meanings of INLINEFORM5 and INLINEFORM6 are just cleverly chosen to be BIBREF33 , BIBREF29 : hatescomplclever In the previous section we already saw that in this way we obtain: hatescompl2 This indeed captures the intended meaning: INLINEFORM7 where we can think of INLINEFORM0 as being a predicate and INLINEFORM1 as being itself. So an interesting new aspect of the last example is that some of the meaning vectors of words are simply cleverly chosen, and in particular, involve INLINEFORM0 -s. Hence, we genuinely exploit full-blown anti-cartesianess. What anti-cartesianess does here is making sure that the transitive verb INLINEFORM1 `receives' INLINEFORM2 as its object. Note also how INLINEFORM3 does pretty much the same as INLINEFORM4 , guiding word meanings through the sentence, with, of course, one very important additional task: negating the sentence meaning. The cautious reader must of course have noticed that in the previous section we used thick wires, while here we used thin ones. Also, the dots in the full-blown description of quantum teleportation, which represent classical data operations, have vanished in this section. Meanwhile, thick wires as well as the dots all of these have acquired a vary natural role in a more refined model of natural language meaning. The dots allow to cleverly choose the meanings of relative pronouns BIBREF34 , BIBREF35 : relpron Thick wires (representing density matrices, rather than vectors BIBREF13 ) allow to encode word ambiguity as mixedness BIBREF36 , BIBREF37 . For example, the different meanings of the word INLINEFORM0 (a rock band, a person, a bee, a chess piece, or a drag —). Mixedness vanishes when providing a sufficient string of words that disambiguates that meaning, e.g.: queen while in the case of: queen2 we need more disambiguating words, since INLINEFORM1 can still refer to a person, a rock band, as well as a drag queen. Meaning is everything The distributional model of meaning BIBREF28 is very useful in that it allows for automation, given a substantially large corpus of text. However, from a conceptual point of view it is far from ideal. So one may ask the question: What is meaning? 
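Before turning to that question, here is a minimal numerical sketch of the compositional model just described: nouns are vectors, a transitive verb is a tensor in the compound space, the cups are tensor contractions, and negation is a linear map on the sentence space. The dimensions, the random vectors and the particular negation matrix are all illustrative assumptions, not choices made in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_n, dim_s = 4, 2                        # toy noun and sentence space dimensions

alice = rng.random(dim_n)                  # noun vectors (in practice: corpus statistics)
bob   = rng.random(dim_n)
hates = rng.random((dim_n, dim_s, dim_n))  # transitive verb lives in N (x) S (x) N

# The two cups wire the subject into the verb's left noun slot and the object
# into its right noun slot; diagrammatically this is the cancellation
# n . (n_r s n_l) . n -> s, numerically it is a tensor contraction.
sentence = np.einsum("i,isj,j->s", alice, hates, bob)   # a vector in the sentence space

# Negation ("does not hate") as a linear map on the sentence space;
# the matrix below is purely an illustrative choice.
neg = np.array([[0.0, 1.0],
                [1.0, 0.0]])
negated_sentence = neg @ sentence
print(sentence, negated_sentence)
```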
One may try to play around with a variety of mathematical structures. The method introduced in BIBREF29 doesn't really depend on how one models meaning, as long as we stick to anti-cartesian togetherness, or something sufficiently closely related BIBREF38 . It is an entertaining exercise to play around with the idea of what possibly could be the ultimate mathematical structure that captures meaning in natural language, until one realises that meaning in natural language truly encompasses everything. Indeed, we use language to talk about everything, e.g. logic, life, biology, physics, social behaviours, politics, so the ultimate model of meaning should encompass all of these fields. So, a theory of meaning in natural language is actually a theory of everything! Can we make sense of the template introduced in the previous section for meaning in natural language, as one for ... everything? Let us first investigate, whether the general distributional paradigm can be specialised to the variety of subject domains mentioned above. The manner in which the distributional model works, is that meanings are assigned relative to a fixed chosen set of context words. The meaning vector of any word then arises by counting the number of occurrences of that word in the close neighbourhood of each of the context words, within a large corpus of text. One can think of the context words as attributes, and the relative frequencies as the relevance of an attribute for the word. Simply by specialising the context words and the corpus, one can specialise to a certain subject domain. For example, if one is interested in social behaviours then the corpus could consist of social networking sites, and the context words could be chosen accordingly. This pragmatic approach allows for quantitative analysis, just like the compositional distributional model of BIBREF29 . Here's another example: hunts Here the meaning of pray could include specification of the available pray, and then the meaning of the sentence would capture the survival success of the lion, given the nature of the available pray. All together, the resulting meaning is the result of the interaction between a particular hunter, a particular pray, and the intricacies of the hunting process, which may depend on the particular environment in which it is taking place. It should be clear that again this situation is radically non-cartesian. Of course, if we now consider the example of quantum theory from two sections ago, the analogues to grammatical types are system types i.e. a specification of the kinds (incl. quantity) of systems that are involved. So it makes sense to refine the grammatical types according to the subject domain. Just like nouns in physics would involve specification of the kinds of systems involved, in biology, for example, this could involve specification of species, population size, environment, availability of food etc. Correspondingly, the top part would not just be restricted to grammatical interaction, but also domain specific interaction, just like in the case of quantum theory. All together, what we obtain is the following picture: general as a (very rough) template for a theory of everything. Acknowledgements The extrapolation of meaning beyond natural language was prompted by having to give a course in a workshop on Logics for Social Behaviour, organised by Alexander Kurz and Alessandra Palmigiano at the Lorentz centre in Leiden. The referee provided useful feedback—I learned a new word: `foo'.
No
af8d3ee6a282aaa885e9126aa4bcb08ac68837e0
af8d3ee6a282aaa885e9126aa4bcb08ac68837e0_0
Q: How big is the dataset used? Text: Introduction Video captioning has drawn increasing attention and shown promising results recently. Translating content-rich video into human language is an extremely complex task, which requires not only extracting abundant multi-modal information from the video but also crossing the semantic gap to generate accurate and fluent language. Thanks to recent developments in deep learning frameworks, such as LSTM networks BIBREF1, as well as machine translation techniques such as BIBREF2, the dominant approach in video captioning is currently based on sequence learning using an encoder-decoder framework. In the encoding phase, the main task is to represent the given videos well, generally including appearance, motion, audio and even speech information. Many pretrained models can be used to extract the above features. In this report, we describe in detail how our system represents videos and uses the video topic as a global semantic clue to guide better alignment. In the decoding phase, conventional models following the encoder-decoder framework almost always predict the next word conditioned on the context information and the previous word. Furthermore, the previous word is the ground-truth word at training time but the model-generated word at inference. As a result, the previous word at training and inference is drawn from different distributions, namely, from the data distribution as opposed to the model distribution. This discrepancy, called exposure bias BIBREF3, leads to a gap between training and inference. Meanwhile, most models apply the cross-entropy loss as their optimization objective, but are typically evaluated at inference using discrete and non-differentiable NLP metrics. For the above reasons, we apply a multi-stage training strategy to train our model, which avoids the exposure bias problem and directly optimizes the metrics for the task at hand. Experiments show that our strategy obtains steady and significant improvements during training and testing. Multi-modal Video Representations We extract video representations from multiple clues including appearance, motion and audio. We also use the video topic to provide global information for specific videos. Given one video, we uniformly extract 28 frames as key frames, then select 16 frames around each key frame as segments. For videos with fewer than 28 frames, the above selection is looped. Appearance features extracted from each frame reflect the global information of these key frames. To extract appearance-based representations from videos, we apply ResNet-152 pretrained on the ImageNet dataset. We also tried deeper and wider networks such as Inception-ResNet, but they barely improve the final result. To model the motion information of each segment in a video, we use I3D pretrained on the Kinetics-600 dataset, which has exactly the same data distribution as the VATEX dataset. As for the audio feature, although using it alone cannot yield very good results for video captioning, it serves as a powerful additional feature that provides more discriminative information for videos with similar content. We build the audio feature extractor on the VGGish network, which is a variant of the VGG network described in BIBREF4. First, we extract MEL-spectrogram patches for each input audio track. The sample rate of the audio is fixed at 16 kHz. The STFT window length is 25 ms and the hop length is 10 ms. The number of Mel filters is 64. We uniformly sample 28 patches for computing the MEL-spectrogram. 
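A rough sketch of this audio pre-processing is given below (using librosa). Parameter values not stated in the text, such as the FFT size and the 96-frame patch length matching the 96x64 VGGish input mentioned next, are assumptions of the sketch.

```python
import numpy as np
import librosa

def extract_logmel_patches(wav_path, n_patches=28, patch_frames=96):
    # 16 kHz sample rate, 25 ms window (400 samples), 10 ms hop (160 samples), 64 mel bins
    y, sr = librosa.load(wav_path, sr=16000)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=512, win_length=400, hop_length=160, n_mels=64)
    log_mel = librosa.power_to_db(mel).T              # (time, 64)

    # Pad short clips by looping, analogous to the frame sampling for short videos.
    while log_mel.shape[0] < patch_frames:
        log_mel = np.concatenate([log_mel, log_mel], axis=0)

    # Uniformly sample 28 patches of shape (96, 64) along the time axis.
    starts = np.linspace(0, log_mel.shape[0] - patch_frames, n_patches).astype(int)
    return np.stack([log_mel[s:s + patch_frames] for s in starts])  # (28, 96, 64)
```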
We then transfer-learn from an existing VGGish model pretrained on the AudioSet dataset BIBREF5. Specifically, we fine-tune this pretrained VGGish model on the VATEX training set for 10 epochs. The input size is $ 96\times 64 $ for log MEL-spectrogram audio inputs. The last group of convolutional and max-pooling layers is replaced by an embedding layer of size 128 on the Mel features. We take this compact embedding layer's output as our audio feature. In the end, we get $ 28\times 2048 $ appearance features, $ 28\times 1024 $ motion features and $ 28\times 128 $ audio features for each video. Note that the multi-modal features should be aligned at the same frames to ensure temporal consistency. Inspired by Wang's work BIBREF6, we found that the topic plays an essential role in video captioning. Intuitively, the topic provides global information for a specific video. A topic can also be seen as a cluster: videos of the same class tend to share similar semantic attributes. We conduct topic embedding and label embedding following the same method reported by Wang BIBREF6. Multi-stage Training Strategy In the first stage, we apply the teacher-forcing method to directly optimize the cross-entropy loss. This step is necessary to warm up the model. In the second stage, we utilize the word-level oracle method BIBREF7 to replace the conventional scheduled sampling method BIBREF8. This method mainly consists of two steps: oracle word selection and sampling with decay. In practice, by introducing the Gumbel-Max technique we can acquire more robust word-level oracles, since it provides a simple and efficient way to sample from a categorical distribution. Moreover, the sampling curve is smoother than that of scheduled sampling due to its specially designed sampling function. This stage noticeably alleviates overfitting and improves the exploration ability of the model. We move to the third stage when the CIDEr BIBREF9 curve has stopped growing for 3 epochs. To avoid the exposure bias problem, the self-critical reinforcement learning algorithm BIBREF10 directly optimizes the metrics of the captioning task. In this work, CIDEr BIBREF9 and BLEU BIBREF11 are optimized with equal weight after the whole sentence has been generated. This stage allows us to train more effectively on non-differentiable metrics, and leads to significant improvements in captioning performance on VATEX. Experiments ::: Dataset We utilize the VATEX dataset for video captioning, which contains over 41,250 videos and 825,000 captions in both English and Chinese. Among the captions, there are over 206,000 English-Chinese parallel translation pairs. It covers 600 human activities and a variety of video content. Each video is paired with 10 diverse English and 10 diverse Chinese captions. We follow the official split with 25,991 videos for training, 3,000 videos for validation and 6,000 public test videos for final testing. Experiments ::: System The overall video captioning framework is illustrated in Figure FIGREF1. In general, it is composed of two components: 1) a multi-modal video encoder; 2) a top-down BIBREF12 based decoder. In the decoding phase, because of the large distribution difference between vision and audio data, we leverage two one-layer LSTM-based architectures to process these two kinds of data separately, namely a Vision-LSTM and an Audio-LSTM. For vision processing, the embedded appearance and motion features are concatenated and then input to the 512-D Vision-LSTM, while the embedded audio features are fed directly into the 128-D Audio-LSTM. 
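A minimal PyTorch sketch of these two encoder streams follows; the LSTM widths and input feature sizes are taken from the text, while the embedding layers and their sizes are assumptions.

```python
import torch
import torch.nn as nn

class MultiModalEncoder(nn.Module):
    def __init__(self, app_dim=2048, mot_dim=1024, aud_dim=128,
                 vis_emb=512, aud_emb=128):
        super().__init__()
        self.app_embed = nn.Linear(app_dim, vis_emb)
        self.mot_embed = nn.Linear(mot_dim, vis_emb)
        self.aud_embed = nn.Linear(aud_dim, aud_emb)
        self.vision_lstm = nn.LSTM(2 * vis_emb, 512, batch_first=True)  # 512-D Vision-LSTM
        self.audio_lstm = nn.LSTM(aud_emb, 128, batch_first=True)       # 128-D Audio-LSTM

    def forward(self, appearance, motion, audio):
        # appearance: (B, 28, 2048), motion: (B, 28, 1024), audio: (B, 28, 128)
        vis = torch.cat([self.app_embed(appearance), self.mot_embed(motion)], dim=-1)
        vis_ctx, _ = self.vision_lstm(vis)                    # (B, 28, 512) vision context
        aud_ctx, _ = self.audio_lstm(self.aud_embed(audio))   # (B, 28, 128) audio context
        return vis_ctx, aud_ctx

enc = MultiModalEncoder()
v, a = enc(torch.randn(2, 28, 2048), torch.randn(2, 28, 1024), torch.randn(2, 28, 128))
```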
In this way, we can obtain context features with sequential information. During the decoding phase, a top-down captioning architecture is adopted. The Attention-LSTM uses the global video topic and the last generated word to guide the temporal attention modules in selecting the most relevant vision and audio regions. Specifically, two independent attention modules apply soft attention to score the corresponding regions under topic guidance. Meanwhile, the Language-LSTM assembles the processed vision and audio context information to generate the next word. Experiments ::: Evaluations of Video Captioning System For this task, four common metrics are evaluated: BLEU-4, METEOR, CIDEr and ROUGE-L. In this subsection, we mainly show the steady and significant improvements obtained at the different training stages, as shown in Table TABREF2. Conclusion In this report, we describe our video captioning system in general terms. Multi-modal information including appearance, motion and audio is extracted to better represent videos. To tackle the exposure bias and overfitting problems, we utilize a multi-stage training strategy to train our model. Both the Chinese and English tracks follow the above methods. The experiments show that our methods obtain steady and significant captioning performance.
over 41,250 videos and 825,000 captions in both English and Chinese., over 206,000 English-Chinese parallel translation pairs
0e510d918456f3d2b390b501a145d92c4f125835
0e510d918456f3d2b390b501a145d92c4f125835_0
Q: How they prove that multi-head self-attention is at least as powerful as convolution layer? Text: Introduction Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer BIBREF1. Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 BIBREF2, BERT BIBREF3 and Transformer-XL BIBREF4, seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks. The key difference between transformers and previous methods, such as recurrent neural networks BIBREF5 and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence. This is made possible thanks to the attention mechanism—originally introduced in Neural Machine Translation to better handle long-range dependencies BIBREF6. With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations. The representation of each word is then updated based on those words whose attention score is highest. Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks. Self-attention was first added to CNN by either using channel-based attention BIBREF7 or non-local relationships across the image BIBREF8. More recently, BIBREF9 augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks. Interestingly, BIBREF0 noticed that, even though state-of-the art results are reached when attention and convolutional features are combined, under same computation and model size constraints, self-attention-only architectures also reach competitive image classification accuracy. These findings raise the question, do self-attention layers process images in a similar manner to convolutional layers? From a theoretical perspective, one could argue that transfomers have the capacity to simulate any function—including a CNN. Indeed, BIBREF10 showed that a multi-layer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic. Unfortunately, universality results do not reveal how a machine solves a task, only that it has the capacity to do so. Thus, the question of how self-attention layers actually process images remains open. Introduction ::: Contributions. In this work, we put forth theoretical and empirical evidence that self-attention layers can (and do) learn to behave similar to convolutional layers: From a theoretical perspective, we provide a constructive proof showing that self-attention layers can express any convolutional layers. Specifically, we show that a single multi-head self-attention layer using relative positional encoding can be re-parametrized to express any convolutional layer. Our insights lead to a relative positional encoding, that we refer to as quadratic encoding, that is very efficient in terms of size. Our experiments show that the first few layers of attention-only architectures BIBREF0 do learn to attend on grid-like pattern around each query pixel, similar to our theoretical construction. 
Strikingly, this behavior is confirmed both for our quadratic encoding, but also for relative encoding that is learned during training. Our results seem to suggest that localized convolution is the right inductive bias for the first few layers of an image classifying network. For deeper layers, on the other hand, long-range as well as horizontally-symmetric inter-dependencies become more relevant. For reproducibility purposes, our code is publicly available on GitHub. Background on Attention Mechanisms for Vision We here recall the mathematical formulation of self-attention layers and emphasize the role of positional encodings. Background on Attention Mechanisms for Vision ::: The Multi-Head Self-Attention Layer Let $\in ^{T\times D_{\textit {in}}}$ be an input matrix consisting of $T$ tokens in of ${D_{\textit {in}}}$ dimensions each. While in NLP each token corresponds to a word in a sentence, the same formalism can be applied to any sequence of $T$ discrete objects, e.g. pixels. A self-attention layer maps any query token $t \in [T]$ from $D_{\textit {in}}$ to $D_{\textit {out}}$ dimensions as follows: Self-Attention()t,: := ( t,: ) val, where we refer to the elements of the $T \times T$ matrix := qrykey as attention scores and the softmax output as attention probabilities. The layer is parametrized by a query matrix $_{\!\textit {qry}}\in ^{D_{\textit {in}} \times D_{k}}$, a key matrix $_{\!\textit {key}}\in ^{D_{\textit {in}} \times D_{k}}$ and a value matrix $_{\!\textit {val}}\in ^{D_{\textit {in}} \times D_{\textit {out}}}$.For simplicity, we exclude any residual connections, batch normalization and constant factors. A key property of the self-attention model described above is that it is equivariant to reordering, that is, it gives the same output independently of how the $T$ input tokens are shuffled. This is problematic for cases we expect the order of things to matter. To alleviate the limitation, a positional encoding is learned for each token in the sequence (or pixel in an image), and added to the representation of the token itself before applying self-attention := (+ ) qrykey(+ ), where $\in ^{T \times D_{\textit {in}}}$ contains the embedding vectors for each position. More generally, $$ may be substituted by any function that returns a vector representation of the position. It has been found beneficial in practice to replicate this self-attention mechanism into multiple heads, each being able to focus on different parts of the input by using different query, key and value matrices. In multi-head self-attention, the output of the $N_h$ heads of output dimension $D_h$ are concatenated and projected to dimension $D_{\textit {out}}$ as follows: MHSA() := *concath [Nh][Self-Attentionh()] out + out and two new parameters are introduced: the projection matrix $_{\!\textit {out}} \in ^{N_h D_h \times D_{\textit {out}}}$ and a bias term $_{\textit {out}}\in ^{D_{\textit {out}}}$. Background on Attention Mechanisms for Vision ::: Attention for Images Convolutional layers are the de facto choice for building neural networks that operate on images. 
We recall that, given an image tensor $~\in ~^{W\times H \times D_{\textit {in}}}$ of width $W$, height $H$ and $D_{\textit {in}}$ channels, the output of a convolutional layer for pixel $(i,j)$ is given by Conv()i,j,: := (1, 2) K 1,2,:,: i+1, j+2, : + , where $$ is the $K \times K \times D_{\textit {out}} \times D_{\textit {in}}$ weight tensor , $\in ^{D_{\textit {out}}}$ is the bias vector and the set contains all possible shifts appearing when convolving the image with a $K\times K$ kernel. In the following, we review how self-attention can be adapted from 1D sequences to images. With images, rather than tokens, we have query and key pixels $, \in [W] \times [H]$. Accordingly, the input is a tensor $$ of dimension $W \times H \times D_{\textit {in}}$ and each attention score associates a query and a key pixel. To keep the formulas consistent with the 1D case, we abuse notation and slice tensors by using a 2D index vector: if $= (i,j)$, we write $_{,:}$ and $_{,:}$ to mean $_{i, j,:}$ and $_{i, j,:,:}$, respectively. With this notation in place, the multi-head self attention layer output at pixel $$ can be expressed as follows: Self-Attention(),: = ( ,: ) ,: val and accordingly for the multi-head case. Background on Attention Mechanisms for Vision ::: Positional Encoding for Images There are two types of positional encoding that has been used in transformer-based architectures: the absolute and relative encoding (see also tab:relworkattention in the Appendix). With absolute encodings, a (fixed or learned) vector $_{,:}$ is assigned to each pixel $$. The computation of the attention scores we saw in eq:attcoeff can then be decomposed as follows: , abs = (,: + ,:) qrykey(,: + ,:) = ,: qrykey,: + ,: qrykey,: + ,:qrykey,: + ,: qrykey,: where $$ and $$ correspond to the query and key pixels, respectively. The relative positional encoding was introduced by BIBREF4. The main idea is to only consider the position difference between the query pixel (pixel we compute the representation of) and the key pixel (pixel we attend) instead of the absolute position of the key pixel: , rel := ,: qry key ,: + ,: qry key + key ,: + key In this manner, the attention scores only depend on the shift ${{\delta }}:= - $. Above, the learnable vectors $$ and $$ are unique for each head, whereas for every shift ${{\delta }}$ the relative positional encoding $_{{{\delta }}} \in ^{D_p}$ is shared by all layers and heads. Moreover, now the key weights are split into two types: $_{\!\textit {key}}$ pertain to the input and $\widehat{}_{\!\textit {key}}$ to the relative position of pixels. Self-Attention as a Convolutional Layer This section derives sufficient conditions such that a multi-head self-attention layer can simulate a convolutional layer. Our main result is the following: Theorem 1 A multi-head self-attention layer with $N_h$ heads of dimension $D_h$, output dimension $D_{\textit {out}}$ and a relative positional encoding of dimension $D_p \ge 3$ can express any convolutional layer of kernel size $\sqrt{N_h} \times \sqrt{N_h}$ and $\min (D_h, D_{\textit {out}})$ output channels. The theorem is proven constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer. In the proposed construction, the attention scores of each self-attention head should attend to a different relative shift within the set $\Delta \!\!\!\!\Delta _K = \lbrace -\lfloor K/2 \rfloor , \dots , \lfloor K/2 \rfloor \rbrace ^2$ of all pixel shifts in a $K\times K$ kernel. 
The exact condition can be found in the statement of Lemma UNKREF15. Then, Lemma UNKREF17 shows that the aforementioned condition is satisfied for the relative positional encoding that we refer to as the quadratic encoding: (h) := -(h) (1, -21(h), -22(h)) := (2 , 1, 2) qry = key := 0 key := The learned parameters ${{\Delta }}^{(h)} = ({{\Delta }}^{(h)}_1, {{\Delta }}^{(h)}_2)$ and $\alpha ^{(h)}$ determine the center and width of attention of each head, respectively. On the other hand, ${{\delta }}= ({{\delta }}_1, {{\delta }}_2)$ is fixed and expresses the relative shift between query and key pixels. It is important to stress that the above encoding is not the only one for which the conditions of Lemma UNKREF15 are satisfied. In fact, in our experiments, the relative encoding learned by the neural network also matched the conditions of the lemma (despite being different from the quadratic encoding). Nevertheless, the encoding defined above is very efficient in terms of size, as only $D_p = 3$ dimensions suffice to encode the relative position of pixels, while also reaching similar or better empirical performance (than the learned one). Though we lack a formal proof, we conjecture that every encoding that satisfies Lemma UNKREF15 should have at least three dimensions. Self-Attention as a Convolutional Layer ::: Remark for the 1D case. Convolutional layers acting on sequences are commonly used in the literature for text BIBREF11, as well as audio BIBREF12 and time series BIBREF13. Theorem UNKREF11 can be straightforwardly extended to show that multi-head self-attention with $N_h$ heads can also simulate a 1D convolutional layer with a kernel of size $K=N_h$ with $\min (D_h, D_{\textit {out}})$ output channels using a positional encoding of dimension $D_p \ge 2$. Since we have not tested empirically if the preceding construction matches the behavior of 1D self-attention in practice, we cannot claim that it actually learns to convolve an input sequence—only that it has the capacity to do so. Self-Attention as a Convolutional Layer ::: Proof of Main Theorem The proof follows directly from Lemmas UNKREF15 and UNKREF17 stated below: Lemma 1 Consider a multi-head self-attention layer consisting of $N_h = K^2$ heads, $D_h \ge D_{\textit {out}}$ and let $~:~[N_h]~\rightarrow ~{\Delta \!\!\!\!\Delta }_K$ be a bijective mapping of heads onto shifts. Further, suppose that for every head the following holds: ((h),:) = {ll 1 if (h) = - 0 otherwise. . Then, for any convolutional layer with a $K \times K$ kernel and $D_{\textit {out}}$ output channels, there exists $\lbrace _{\!\textit {val}}^{(h)}\rbrace _{h \in [N_h]}$ such that $ \operatorname{MHSA}() = \operatorname{Conv}() $ for every $\in ^{W \times H \times D_{\textit {in}}}$. Our first step will be to rework the expression of the Multi-Head Self-Attention operator from (SECREF6) and (SECREF6) such that the effect of the multiple heads becomes more transparent: MHSA() = out+ h [Nh] ((h)) val(h) out[(h-1)Dh + 1:h Dh +1] (h) Note that each head's value matrix $_{\!\textit {val}}^{(h)} \in ^{D_{\textit {in}} \times D_{h}}$ and each block of the projection matrix $_{\textit {out}}$ of dimension $D_h \times D_{\textit {out}}$ are learned. Assuming that $D_h \ge D_{\textit {out}}$, we can replace each pair of matrices by a learned matrix $^{(h)}$ for each head. 
We consider one output pixel of the multi-head self-attention: MHSA(),: = h [Nh] ( ((h),:) ,: ) (h) + out Due to the conditions of the Lemma, for the $h$-th attention head the attention probability is one when $ = - (h) $ and zero otherwise. The layer's output at pixel $$ is thus equal to MHSA() = h [Nh] - (h),: (h) + out For $K = \sqrt{N_h}$, the above can be seen to be equivalent to a convolutional layer expressed in eq. SECREF8: there is a one to one mapping (implied by map $$) between the matrices $^{(h)}$ for $h = [N_h]$ and the matrices $_{k_1,k_2,:,:}$ for all $(k_1,k_2) \in [K]^2.$ Self-Attention as a Convolutional Layer ::: Proof of Main Theorem ::: Remark about @!START@$D_h$@!END@ and @!START@$D_{\textit {out}}$@!END@. It is frequent in transformer-based architectures to set $D_h~=~D_{\textit {out}}/N_h$, hence $D_h < D_{\textit {out}}$. In that case, $^{(h)}$ can be seen to be of rank $D_{\textit {out}} - D_h$, which does not suffice to express every convolutional layer with $D_{\textit {out}}$ channels. Nevertheless, it can be seen that any $D_h$ out of $D_{\textit {out}}$ outputs of $\operatorname{MHSA}()$ can express the output of any convolutional layer with $D_h$ output channels. To cover both cases, in the statement of the main theorem we assert that the output channels of the convolutional layer should be $\min (D_h, D_{\textit {out}})$. In practice, we advise to concatenate heads of dimension $D_h = D_{\textit {out}}$ instead of splitting the $D_{\textit {out}}$ dimensions among heads to have exact re-parametrization and no “unused” channels. Lemma 2 There exists a relative encoding scheme $\lbrace _{{\delta }}\in ^{D_p}\rbrace _{{{\delta }}\in \mathbb {Z}^2}$ with $D_p \ge 3$ and parameters $_{\!\textit {qry}}, _{\!\textit {key}}, \widehat{}_{\!\textit {key}},$ with $D_p \le D_k$ such that, for every ${{\Delta }}\in \Delta \!\!\!\!\Delta _K$ there exists some vector $$ (conditioned on ${{\Delta }}$) yielding $ (_{,:})_{} = 1 $ if $ - = {{\Delta }}$ and zero, otherwise. We show by construction the existence of a $D_p=3$ dimensional relative encoding scheme yielding the required attention probabilities. As the attention probabilities are independent of the input tensor $$, we set $_{\!\textit {key}}=_{\!\textit {qry}}={0}$ which leaves only the last term of eq:attrel. Setting $\widehat{}_{\!\textit {key}}\in ^{D_k \times D_p}$ to the identity matrix (with appropriate row padding), yields $_{, } = ^{\top } _{{{\delta }}}$ where $\quad {{\delta }}:= - $. Above, we have assumed that $D_p \le D_k$ such that no information from $_{{{\delta }}}$ is lost. Now, suppose that we could write: , = -(- 2 + c) for some constant $c$. In the above expression, the maximum attention score over $_{, :}$ is $-\alpha c$ and it is reached for $_{, }$ with ${{\delta }}= {{\Delta }}$. On the other hand, the $\alpha $ coefficient can be used to scale arbitrarily the difference between $_{,{{\Delta }}}$ and the other attention scores. In this way, for ${{\delta }}= {{\Delta }}$, we have (,:) = e-(- 2+c) ' e-((- ')- 2+c) = e-- 2 e-c ' e-(- ')- 2 e-c = e-- 2 ' e-(- ')- 2 = 1 1 + ' e-(- ')- 2 = 1 and for ${{\delta }}\ne {{\Delta }}$, the equation becomes $ \lim _{\alpha \rightarrow \infty } (_{,:})_{} = 0, $ exactly as needed to satisfy the lemma statement. What remains is to prove that there exist $$ and $\lbrace _{{\delta }}\rbrace _{{{\delta }}\in \mathcal {Z}^2}$ for which eq:isoattdecomposed holds. 
Expanding the rhs of the equation, we have $ -\alpha (\Vert {{\delta }}- {{\Delta }}\Vert ^2 + c) = -\alpha ( \Vert {{\delta }}\Vert ^2 + \Vert {{\Delta }}\Vert ^2 - 2\langle {{\delta }}, {{\Delta }}\rangle + c )\,. $ Now if we set $ = -\alpha \, (1, -2{{\Delta }}_1, -2{{\Delta }}_2) $ and $ _{{\delta }}= (\Vert {{\delta }}\Vert ^2 , {{\delta }}_1, {{\delta }}_2), $ then which matches eq:isoattdecomposed with $c = -\Vert {{\Delta }}\Vert ^2$ and the proof is concluded. Experiments The aim of this section is to validate the applicability of our theoretical results—which state that self-attention can perform convolution—and to examine whether self-attention layers in practice do actually learn to operate like convolutional layers, when being trained on standard image classification tasks. In particular, we study the relationship between self-attention and convolution with quadratic and learned relative positional encodings. We find that for both cases, the attention probabilities learned tend to respect the conditions of Lemma UNKREF15, corroborating our hypothesis. Experiments ::: Implementation Details We study a fully attentional model consisting of six multi-head self-attention layers. As it has already been shown by BIBREF9 that combining attention features with convolutional features improves performance on Cifar-100 and ImageNet, we do not focus on attaining state-of-the-art performance. Nevertheless, to validate that our model learns a meaningful classifier we compare it to the standard ResNet18 BIBREF14 on the CIFAR-10 dataset BIBREF15. In all experiments, we use a $2\times 2$ invertible down-sampling BIBREF16 on the input to reduce the size of the image as storing the attention coefficient tensor requires a large amount of GPU memory. The fixed size representation of the input image is computed as the average pooling of the last layer representations and given to a linear classifier. We used the PyTorch library BIBREF17 and based our implementation on PyTorch Transformers. We release our code on Github and all hyper-parameters are in tab:hyper-parameter in the Appendix. Experiments ::: Quadratic Encoding As a first step, we aim to verify that, with the relative position encoding introduced in (SECREF3), attention layers learn to behave like convolutional layers. We train nine attention heads at each layer to be on par with the $3\times 3$ kernels used predominantly by the ResNet architecture. The center of attention of each head $h$ is initialized to $\Delta ^{(h)} \sim \mathcal {N}({0}, 2_2)$. fig:isoduringtraining shows how the initial positions of the heads (different colors) at layer 4 changed during training. We can see that after optimization, the heads attend on specific pixel of the image forming a grid around the query pixel. Our intuition that Self-Attention applied to images learn convolutional filter around the queried pixel is then confirmed. fig:isoattentionfinal displays all attention head at each layer of the model at the end of the training. It can be seen that in the first few layers the heads tend to focus on local patterns (layers 1 and 2), while deeper layers (layers 3-6) also attend to larger patterns by positioning the center of attention further from the queried pixel position. We also include in the Appendix a plot of the attention positions for a higher number of heads ($N_h=16$), fig:isomanyheads displays both local patterns similar to CNN and long range dependencies. 
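The construction behind Theorem 1 can also be checked numerically. The sketch below (toy sizes, bias omitted, a single interior query pixel) builds one head per shift of a 3x3 kernel, uses the quadratic encoding with a large alpha so that each head's softmax saturates onto its shift, sets each head's value/projection product to the matching kernel slice, and compares the result with an ordinary convolution. All sizes and the random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 6; D_in, D_out, K = 3, 4, 3
X = rng.random((H, W, D_in))
kernel = rng.random((K, K, D_out, D_in))            # conv weights
shifts = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

def conv_at(q):
    """Ordinary K x K convolution output at pixel q (no bias)."""
    i, j = q
    return sum(kernel[dx + 1, dy + 1] @ X[i + dx, j + dy] for dx, dy in shifts)

def mhsa_at(q, alpha=100.0):
    """One head per shift: quadratic scores -alpha * ||delta - Delta_h||^2
    (equivalent to the quadratic encoding up to a per-head constant, which the
    softmax ignores), softmax over all key pixels, then each head's effective
    value matrix W^(h) is the kernel slice for its shift."""
    i, j = q
    out = np.zeros(D_out)
    keys = [(a, b) for a in range(H) for b in range(W)]
    for dx, dy in shifts:                           # head h has centre Delta_h = (dx, dy)
        scores = np.array([-alpha * ((a - i - dx) ** 2 + (b - j - dy) ** 2)
                           for a, b in keys])
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                        # ~ one-hot on the key at (i+dx, j+dy)
        attended = sum(p * X[a, b] for p, (a, b) in zip(probs, keys))
        out += kernel[dx + 1, dy + 1] @ attended
    return out

q = (3, 3)
print(np.allclose(conv_at(q), mhsa_at(q), atol=1e-4))   # True: MHSA reproduces the conv output
```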
Interestingly, attention heads do not overlap and seem to take an arrangement maximizing the coverage of the input space. To verify that our self-attention model performs equally well as a small ResNet (tab:parametersize), in fig:learnedattentionmap we display the evolution of the test accuracy on CIFAR-10 over the 300 epochs of training. The ResNet is faster to converge, but we cannot ascertain whether this corresponds to an inherent property of the architecture or an artifact of the adopted optimization procedures. Our implementation could be optimized to exploit the locality of Gaussian attention probabilities and reduce significantly the number of FLOPS. Experiments ::: Learned Relative Positional Encoding We move on to study the positional encoding used in practice by fully-attentional models on images. We implemented the 2D relative positional encoding scheme used by BIBREF0, BIBREF9: we learn a $\lfloor D_p / 2 \rfloor $ position encoding vector for each row and each column pixel shift. Hence the relative positional encoding of a key pixel at position $$ with a query pixel at position $$ is the concatenation of the row shift embedding ${{\delta }}_1$ and the column shift embedding ${{\delta }}_2$ (where ${{\delta }}= - $). We chose $D_p = D_{\textit {out}} = 400$ in the experiment. We differ from the (unpublished) implementation described by BIBREF0 in the following points: (i) we do not use convolution stem and ResNet bottlenecks for downsampling, but only a $2\times 2$ invertible downsampling layer BIBREF16 at input, (ii) we use $D_h = D_{\textit {out}}$ instead of $D_h = D_{\textit {out}} / N_h$ backed from our theory that the effective number of learned filters is $\min (D_h, D_{\textit {out}})$, (iii) the attention scores are computed using only the relative positions of the pixels and not the data. As seen in tab:parametersize, our implementation achieves accuracy close to that of ResNet18. The attention probabilities of each head at each layer are displayed on fig:learnedattentionmap. The figure confirms our hypothesis for the first two layers and partially for the third: even when left to learn the encoding from the data, certain self-attention heads (depicted on the left) learn to attend to individual pixels, closely matching the condition of Lemma UNKREF15 and thus Theorem UNKREF11. At the same time, other heads pay attention to horizontally-symmetric but non-localized patterns, as well as to long-range pixel inter-dependencies. The phenomenon is particularly prominent for layers four to six, where the behavior of self-attention can be seen to deviate from that of convolution. We also notice that vertical symmetry is much more rare in the learned attention probabilities of high layers. This matches the intuition that, for image classification, distinguishing between what is above or below something is more crucial than what is left or right. Finally, the fact that some of the heads in the last two layers seem to be redundant, likely indicating that the computational and space complexity of the model could be amenable to further reduction, for example by pruning. Conclusion We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that learned fully-attentional models do behave similar to CNN in practice. More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters—similar to deformable convolutions BIBREF18, BIBREF19. 
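For concreteness, here is a sketch of this learned 2D relative encoding: the encoding of a shift is the concatenation of a learned row-shift embedding and a learned column-shift embedding, and attention scores are computed from positions only (point (iii) above). The maximum shift, the number of heads and the per-head learned query vector are assumptions of the sketch, not details given in the text.

```python
import torch
import torch.nn as nn

class Learned2DRelativeEncoding(nn.Module):
    def __init__(self, max_shift=16, d_p=400, n_heads=9):
        super().__init__()
        self.row = nn.Embedding(2 * max_shift + 1, d_p // 2)   # one vector per row shift
        self.col = nn.Embedding(2 * max_shift + 1, d_p // 2)   # one vector per column shift
        self.head_query = nn.Parameter(torch.randn(n_heads, d_p))  # learned per-head query
        self.max_shift = max_shift

    def forward(self, delta_rows, delta_cols):
        # delta_rows / delta_cols: integer shifts (query minus key), shape (n_pairs,)
        r = self.row(delta_rows + self.max_shift)
        c = self.col(delta_cols + self.max_shift)
        rel = torch.cat([r, c], dim=-1)              # (n_pairs, d_p) relative encodings
        return self.head_query @ rel.t()             # (n_heads, n_pairs) attention scores

enc = Learned2DRelativeEncoding()
scores = enc(torch.tensor([0, 1, -2]), torch.tensor([0, -1, 3]))
probs = scores.softmax(dim=-1)                       # data-independent attention probabilities
```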
Interesting directions for future work include translating existing insights from the rich CNNs literature back to transformers on various data modalities, including images, text and time series. Also, though we currently lack the computational resources to do so, we would be interested to test whether our findings are replicated for datasets of larger-scale, such as ImageNet and COCO. Appendix ::: Generalized Quadratic Positional Encoding We noticed the similarity of the attention probabilities in the quadratic positional encoding (sec:attentioncanimplementcnn) to isotropic bivariate Gaussian distributions with bounded support: (, :) = e-(- ) - 2' [W] [H] e-(' - ) - 2 . Building on this observation, we further extended our attention mechanism to non-isotropic Gaussian distribution over pixel positions. Each head is parametrized by a center of attention ${{\Delta }}$ and a covariance matrix $$ to obtain the following attention scores, , = -1 2 (- )-1 (- ) = -1 2 -1 + -1 - 1 2 -1 , where, once more, ${{\delta }}= - $. The last term can be discarded because the softmax is shift invariant and we rewrite the attention coefficient as a dot product between the head target vector $$ and the relative position encoding $_{{{\delta }}}$ (consisting of the first and second order combinations of the shift in pixels ${{\delta }}$): Appendix ::: Generalized Quadratic Positional Encoding ::: Evaluation. We trained our model using this generalized quadratic relative position encoding. We were curious to see if, using the above encoding the self-attention model would learn to attend to non-isotropic groups of pixels—thus forming unseen patterns in CNNs. Each head was parametrized by ${{\Delta }}\in ^2$ and $^{-1/2} \in ^{2\times 2}$ to ensure that the covariance matrix remained positive semi-definite. We initialized the center of attention to ${{\Delta }}^{(h)} \sim \mathcal {N}(\mathbf {0}, 2 _2)$ and $^{-1/2} = _2 + \mathcal {N}(\mathbf {0}, 0.01 _2)$ so that initial attention probabilities were close to an isotropic Gaussian. fig:nonisoattentionfinal shows that the network did learn non-isotropic attention probability patterns, especially in high layers. Nevertheless, the fact that we do not obtain any performance improvement seems to suggest that attention non-isotropy is not particularly helpful in practice—the quadratic positional encoding suffices. Appendix ::: Generalized Quadratic Positional Encoding ::: Pruning degenerated heads. We noticed that certain non-isotropic attention heads attended on “non-intuitive” patches of pixels: either attending a very thin stripe of pixels, when $^{-1}$ was almost singular, or attending all pixels uniformly, when $^{-1}$ was close to $\mathbf {0}$ (i.e. constant attention scores). We asked ourselves, are such attention patterns indeed useful for the model or are these heads degenerated and unused? To find out, we pruned all heads having largest eigen-values smaller than $10^{-5}$ or condition number (ratio of the biggest and smallest eigen-values) greater than $10^5$. Specifically in our model with 6-layer and 9-heads each, we pruned $[2, 4, 1, 2, 6, 0]$ heads from the first to the last layer. This means that these layers cannot express a $3 \times 3$ kernel anymore. As shown in yellow on fig:learningcurve, this ablation initially hurts a bit the performance, probably due to off biases, but after a few epochs of continued training with a smaller learning rate (divided by 10) the accuracy recovers its unpruned value. 
Hence, without sacrificing performance, we reduce the size of the parameters and the number of FLOPS by a fourth. Appendix ::: Increasing the Number of Heads For completeness, we also tested increasing the number of heads of our architecture from 9 to 16. Similar to Figure FIGREF23, we see that the network distinguishes two main types of attention patterns. Localized heads (i.e., those that attend to nearly individual pixels) appear more frequently in the first few layers. The self-attention layer uses these heads to act in a manner similar to how convolutional layers do. Heads with less-localized attention become more common at higher layers.
constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer
7ac0cec79c8c2b1909b0a1cc0d4646fce09884ee
7ac0cec79c8c2b1909b0a1cc0d4646fce09884ee_0
Q: Is there a way of converting existing convolution layers into self-attention to perform very same convolution? Text: Introduction Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer BIBREF1. Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 BIBREF2, BERT BIBREF3 and Transformer-XL BIBREF4, seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks. The key difference between transformers and previous methods, such as recurrent neural networks BIBREF5 and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence. This is made possible thanks to the attention mechanism—originally introduced in Neural Machine Translation to better handle long-range dependencies BIBREF6. With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations. The representation of each word is then updated based on those words whose attention score is highest. Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks. Self-attention was first added to CNN by either using channel-based attention BIBREF7 or non-local relationships across the image BIBREF8. More recently, BIBREF9 augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks. Interestingly, BIBREF0 noticed that, even though state-of-the art results are reached when attention and convolutional features are combined, under same computation and model size constraints, self-attention-only architectures also reach competitive image classification accuracy. These findings raise the question, do self-attention layers process images in a similar manner to convolutional layers? From a theoretical perspective, one could argue that transfomers have the capacity to simulate any function—including a CNN. Indeed, BIBREF10 showed that a multi-layer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic. Unfortunately, universality results do not reveal how a machine solves a task, only that it has the capacity to do so. Thus, the question of how self-attention layers actually process images remains open. Introduction ::: Contributions. In this work, we put forth theoretical and empirical evidence that self-attention layers can (and do) learn to behave similar to convolutional layers: From a theoretical perspective, we provide a constructive proof showing that self-attention layers can express any convolutional layers. Specifically, we show that a single multi-head self-attention layer using relative positional encoding can be re-parametrized to express any convolutional layer. Our insights lead to a relative positional encoding, that we refer to as quadratic encoding, that is very efficient in terms of size. Our experiments show that the first few layers of attention-only architectures BIBREF0 do learn to attend on grid-like pattern around each query pixel, similar to our theoretical construction. 
Strikingly, this behavior is confirmed both for our quadratic encoding, but also for relative encoding that is learned during training. Our results seem to suggest that localized convolution is the right inductive bias for the first few layers of an image classifying network. For deeper layers, on the other hand, long-range as well as horizontally-symmetric inter-dependencies become more relevant. For reproducibility purposes, our code is publicly available on GitHub. Background on Attention Mechanisms for Vision We here recall the mathematical formulation of self-attention layers and emphasize the role of positional encodings. Background on Attention Mechanisms for Vision ::: The Multi-Head Self-Attention Layer Let $\in ^{T\times D_{\textit {in}}}$ be an input matrix consisting of $T$ tokens in of ${D_{\textit {in}}}$ dimensions each. While in NLP each token corresponds to a word in a sentence, the same formalism can be applied to any sequence of $T$ discrete objects, e.g. pixels. A self-attention layer maps any query token $t \in [T]$ from $D_{\textit {in}}$ to $D_{\textit {out}}$ dimensions as follows: Self-Attention()t,: := ( t,: ) val, where we refer to the elements of the $T \times T$ matrix := qrykey as attention scores and the softmax output as attention probabilities. The layer is parametrized by a query matrix $_{\!\textit {qry}}\in ^{D_{\textit {in}} \times D_{k}}$, a key matrix $_{\!\textit {key}}\in ^{D_{\textit {in}} \times D_{k}}$ and a value matrix $_{\!\textit {val}}\in ^{D_{\textit {in}} \times D_{\textit {out}}}$.For simplicity, we exclude any residual connections, batch normalization and constant factors. A key property of the self-attention model described above is that it is equivariant to reordering, that is, it gives the same output independently of how the $T$ input tokens are shuffled. This is problematic for cases we expect the order of things to matter. To alleviate the limitation, a positional encoding is learned for each token in the sequence (or pixel in an image), and added to the representation of the token itself before applying self-attention := (+ ) qrykey(+ ), where $\in ^{T \times D_{\textit {in}}}$ contains the embedding vectors for each position. More generally, $$ may be substituted by any function that returns a vector representation of the position. It has been found beneficial in practice to replicate this self-attention mechanism into multiple heads, each being able to focus on different parts of the input by using different query, key and value matrices. In multi-head self-attention, the output of the $N_h$ heads of output dimension $D_h$ are concatenated and projected to dimension $D_{\textit {out}}$ as follows: MHSA() := *concath [Nh][Self-Attentionh()] out + out and two new parameters are introduced: the projection matrix $_{\!\textit {out}} \in ^{N_h D_h \times D_{\textit {out}}}$ and a bias term $_{\textit {out}}\in ^{D_{\textit {out}}}$. Background on Attention Mechanisms for Vision ::: Attention for Images Convolutional layers are the de facto choice for building neural networks that operate on images. 
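Before turning to images, here is a minimal PyTorch sketch of the multi-head self-attention layer just described (positional encodings, residual connections and normalization omitted); the tensor names and shapes are our own illustration, not the paper's released code.

```python
import torch

def multi_head_self_attention(X, W_qry, W_key, W_val, W_out, b_out):
    """X: (T, D_in); W_qry, W_key: (N_h, D_in, D_k); W_val: (N_h, D_in, D_h);
    W_out: (N_h * D_h, D_out); b_out: (D_out,)."""
    heads = []
    for h in range(W_qry.shape[0]):
        A = X @ W_qry[h] @ W_key[h].T @ X.T            # (T, T) attention scores
        P = torch.softmax(A, dim=-1)                   # attention probabilities
        heads.append(P @ X @ W_val[h])                 # (T, D_h) per-head output
    return torch.cat(heads, dim=-1) @ W_out + b_out    # (T, D_out)

T, D_in, D_k, D_h, D_out, N_h = 10, 16, 8, 8, 16, 4
out = multi_head_self_attention(
    torch.randn(T, D_in),
    torch.randn(N_h, D_in, D_k), torch.randn(N_h, D_in, D_k),
    torch.randn(N_h, D_in, D_h),
    torch.randn(N_h * D_h, D_out), torch.randn(D_out))
print(out.shape)  # torch.Size([10, 16])
```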
We recall that, given an image tensor $~\in ~^{W\times H \times D_{\textit {in}}}$ of width $W$, height $H$ and $D_{\textit {in}}$ channels, the output of a convolutional layer for pixel $(i,j)$ is given by Conv()i,j,: := (1, 2) K 1,2,:,: i+1, j+2, : + , where $$ is the $K \times K \times D_{\textit {out}} \times D_{\textit {in}}$ weight tensor , $\in ^{D_{\textit {out}}}$ is the bias vector and the set contains all possible shifts appearing when convolving the image with a $K\times K$ kernel. In the following, we review how self-attention can be adapted from 1D sequences to images. With images, rather than tokens, we have query and key pixels $, \in [W] \times [H]$. Accordingly, the input is a tensor $$ of dimension $W \times H \times D_{\textit {in}}$ and each attention score associates a query and a key pixel. To keep the formulas consistent with the 1D case, we abuse notation and slice tensors by using a 2D index vector: if $= (i,j)$, we write $_{,:}$ and $_{,:}$ to mean $_{i, j,:}$ and $_{i, j,:,:}$, respectively. With this notation in place, the multi-head self attention layer output at pixel $$ can be expressed as follows: Self-Attention(),: = ( ,: ) ,: val and accordingly for the multi-head case. Background on Attention Mechanisms for Vision ::: Positional Encoding for Images There are two types of positional encoding that has been used in transformer-based architectures: the absolute and relative encoding (see also tab:relworkattention in the Appendix). With absolute encodings, a (fixed or learned) vector $_{,:}$ is assigned to each pixel $$. The computation of the attention scores we saw in eq:attcoeff can then be decomposed as follows: , abs = (,: + ,:) qrykey(,: + ,:) = ,: qrykey,: + ,: qrykey,: + ,:qrykey,: + ,: qrykey,: where $$ and $$ correspond to the query and key pixels, respectively. The relative positional encoding was introduced by BIBREF4. The main idea is to only consider the position difference between the query pixel (pixel we compute the representation of) and the key pixel (pixel we attend) instead of the absolute position of the key pixel: , rel := ,: qry key ,: + ,: qry key + key ,: + key In this manner, the attention scores only depend on the shift ${{\delta }}:= - $. Above, the learnable vectors $$ and $$ are unique for each head, whereas for every shift ${{\delta }}$ the relative positional encoding $_{{{\delta }}} \in ^{D_p}$ is shared by all layers and heads. Moreover, now the key weights are split into two types: $_{\!\textit {key}}$ pertain to the input and $\widehat{}_{\!\textit {key}}$ to the relative position of pixels. Self-Attention as a Convolutional Layer This section derives sufficient conditions such that a multi-head self-attention layer can simulate a convolutional layer. Our main result is the following: Theorem 1 A multi-head self-attention layer with $N_h$ heads of dimension $D_h$, output dimension $D_{\textit {out}}$ and a relative positional encoding of dimension $D_p \ge 3$ can express any convolutional layer of kernel size $\sqrt{N_h} \times \sqrt{N_h}$ and $\min (D_h, D_{\textit {out}})$ output channels. The theorem is proven constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer. In the proposed construction, the attention scores of each self-attention head should attend to a different relative shift within the set $\Delta \!\!\!\!\Delta _K = \lbrace -\lfloor K/2 \rfloor , \dots , \lfloor K/2 \rfloor \rbrace ^2$ of all pixel shifts in a $K\times K$ kernel. 
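Written out in full, the two positional-score decompositions recalled above read as follows (our reconstruction, following the Transformer-XL-style relative attention the text cites; $\mathbf{u}$ and $\mathbf{v}$ are the per-head learnable vectors):

```latex
% Absolute encoding: expanding (X_q + P_q) W_qry W_key^T (X_k + P_k)^T
A^{\mathrm{abs}}_{\mathbf{q},\mathbf{k}}
  = \mathbf{X}_{\mathbf{q},:} \mathbf{W}_{\mathit{qry}} \mathbf{W}_{\mathit{key}}^{\top} \mathbf{X}_{\mathbf{k},:}^{\top}
  + \mathbf{X}_{\mathbf{q},:} \mathbf{W}_{\mathit{qry}} \mathbf{W}_{\mathit{key}}^{\top} \mathbf{P}_{\mathbf{k},:}^{\top}
  + \mathbf{P}_{\mathbf{q},:} \mathbf{W}_{\mathit{qry}} \mathbf{W}_{\mathit{key}}^{\top} \mathbf{X}_{\mathbf{k},:}^{\top}
  + \mathbf{P}_{\mathbf{q},:} \mathbf{W}_{\mathit{qry}} \mathbf{W}_{\mathit{key}}^{\top} \mathbf{P}_{\mathbf{k},:}^{\top}

% Relative encoding: only the shift delta = k - q enters, through r_delta
A^{\mathrm{rel}}_{\mathbf{q},\mathbf{k}}
  := \mathbf{X}_{\mathbf{q},:} \mathbf{W}_{\mathit{qry}} \mathbf{W}_{\mathit{key}}^{\top} \mathbf{X}_{\mathbf{k},:}^{\top}
  + \mathbf{X}_{\mathbf{q},:} \mathbf{W}_{\mathit{qry}} \widehat{\mathbf{W}}_{\mathit{key}} \mathbf{r}_{\boldsymbol{\delta}}
  + \mathbf{u}^{\top} \mathbf{W}_{\mathit{key}}^{\top} \mathbf{X}_{\mathbf{k},:}^{\top}
  + \mathbf{v}^{\top} \widehat{\mathbf{W}}_{\mathit{key}} \mathbf{r}_{\boldsymbol{\delta}}
```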
The exact condition can be found in the statement of Lemma UNKREF15. Then, Lemma UNKREF17 shows that the aforementioned condition is satisfied for the relative positional encoding that we refer to as the quadratic encoding: (h) := -(h) (1, -21(h), -22(h)) := (2 , 1, 2) qry = key := 0 key := The learned parameters ${{\Delta }}^{(h)} = ({{\Delta }}^{(h)}_1, {{\Delta }}^{(h)}_2)$ and $\alpha ^{(h)}$ determine the center and width of attention of each head, respectively. On the other hand, ${{\delta }}= ({{\delta }}_1, {{\delta }}_2)$ is fixed and expresses the relative shift between query and key pixels. It is important to stress that the above encoding is not the only one for which the conditions of Lemma UNKREF15 are satisfied. In fact, in our experiments, the relative encoding learned by the neural network also matched the conditions of the lemma (despite being different from the quadratic encoding). Nevertheless, the encoding defined above is very efficient in terms of size, as only $D_p = 3$ dimensions suffice to encode the relative position of pixels, while also reaching similar or better empirical performance (than the learned one). Though we lack a formal proof, we conjecture that every encoding that satisfies Lemma UNKREF15 should have at least three dimensions. Self-Attention as a Convolutional Layer ::: Remark for the 1D case. Convolutional layers acting on sequences are commonly used in the literature for text BIBREF11, as well as audio BIBREF12 and time series BIBREF13. Theorem UNKREF11 can be straightforwardly extended to show that multi-head self-attention with $N_h$ heads can also simulate a 1D convolutional layer with a kernel of size $K=N_h$ with $\min (D_h, D_{\textit {out}})$ output channels using a positional encoding of dimension $D_p \ge 2$. Since we have not tested empirically if the preceding construction matches the behavior of 1D self-attention in practice, we cannot claim that it actually learns to convolve an input sequence—only that it has the capacity to do so. Self-Attention as a Convolutional Layer ::: Proof of Main Theorem The proof follows directly from Lemmas UNKREF15 and UNKREF17 stated below: Lemma 1 Consider a multi-head self-attention layer consisting of $N_h = K^2$ heads, $D_h \ge D_{\textit {out}}$ and let $~:~[N_h]~\rightarrow ~{\Delta \!\!\!\!\Delta }_K$ be a bijective mapping of heads onto shifts. Further, suppose that for every head the following holds: ((h),:) = {ll 1 if (h) = - 0 otherwise. . Then, for any convolutional layer with a $K \times K$ kernel and $D_{\textit {out}}$ output channels, there exists $\lbrace _{\!\textit {val}}^{(h)}\rbrace _{h \in [N_h]}$ such that $ \operatorname{MHSA}() = \operatorname{Conv}() $ for every $\in ^{W \times H \times D_{\textit {in}}}$. Our first step will be to rework the expression of the Multi-Head Self-Attention operator from (SECREF6) and (SECREF6) such that the effect of the multiple heads becomes more transparent: MHSA() = out+ h [Nh] ((h)) val(h) out[(h-1)Dh + 1:h Dh +1] (h) Note that each head's value matrix $_{\!\textit {val}}^{(h)} \in ^{D_{\textit {in}} \times D_{h}}$ and each block of the projection matrix $_{\textit {out}}$ of dimension $D_h \times D_{\textit {out}}$ are learned. Assuming that $D_h \ge D_{\textit {out}}$, we can replace each pair of matrices by a learned matrix $^{(h)}$ for each head. 
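As a quick numerical sanity check of this re-writing (concatenating the heads and projecting gives the same result as summing per-head products), consider the following sketch; all variable names are illustrative.

```python
import torch

torch.manual_seed(0)
T, D_in, D_k, D_h, D_out, N_h = 5, 4, 3, 6, 7, 2
X = torch.randn(T, D_in)
W_qry, W_key = torch.randn(N_h, D_in, D_k), torch.randn(N_h, D_in, D_k)
W_val = torch.randn(N_h, D_in, D_h)
W_out, b_out = torch.randn(N_h * D_h, D_out), torch.randn(D_out)

# Standard form: concatenate heads, then project.
heads = [torch.softmax(X @ W_qry[h] @ W_key[h].T @ X.T, dim=-1) @ X @ W_val[h]
         for h in range(N_h)]
mhsa = torch.cat(heads, dim=-1) @ W_out + b_out

# Reworked form: sum over heads, each using W_val^(h) times its block of W_out.
rework = b_out + sum(
    torch.softmax(X @ W_qry[h] @ W_key[h].T @ X.T, dim=-1) @ X
    @ (W_val[h] @ W_out[h * D_h:(h + 1) * D_h])
    for h in range(N_h))
print(torch.allclose(mhsa, rework, atol=1e-5))  # True
```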
We consider one output pixel of the multi-head self-attention: MHSA(),: = h [Nh] ( ((h),:) ,: ) (h) + out Due to the conditions of the Lemma, for the $h$-th attention head the attention probability is one when $ = - (h) $ and zero otherwise. The layer's output at pixel $$ is thus equal to MHSA() = h [Nh] - (h),: (h) + out For $K = \sqrt{N_h}$, the above can be seen to be equivalent to a convolutional layer expressed in eq. SECREF8: there is a one to one mapping (implied by map $$) between the matrices $^{(h)}$ for $h = [N_h]$ and the matrices $_{k_1,k_2,:,:}$ for all $(k_1,k_2) \in [K]^2.$ Self-Attention as a Convolutional Layer ::: Proof of Main Theorem ::: Remark about @!START@$D_h$@!END@ and @!START@$D_{\textit {out}}$@!END@. It is frequent in transformer-based architectures to set $D_h~=~D_{\textit {out}}/N_h$, hence $D_h < D_{\textit {out}}$. In that case, $^{(h)}$ can be seen to be of rank $D_{\textit {out}} - D_h$, which does not suffice to express every convolutional layer with $D_{\textit {out}}$ channels. Nevertheless, it can be seen that any $D_h$ out of $D_{\textit {out}}$ outputs of $\operatorname{MHSA}()$ can express the output of any convolutional layer with $D_h$ output channels. To cover both cases, in the statement of the main theorem we assert that the output channels of the convolutional layer should be $\min (D_h, D_{\textit {out}})$. In practice, we advise to concatenate heads of dimension $D_h = D_{\textit {out}}$ instead of splitting the $D_{\textit {out}}$ dimensions among heads to have exact re-parametrization and no “unused” channels. Lemma 2 There exists a relative encoding scheme $\lbrace _{{\delta }}\in ^{D_p}\rbrace _{{{\delta }}\in \mathbb {Z}^2}$ with $D_p \ge 3$ and parameters $_{\!\textit {qry}}, _{\!\textit {key}}, \widehat{}_{\!\textit {key}},$ with $D_p \le D_k$ such that, for every ${{\Delta }}\in \Delta \!\!\!\!\Delta _K$ there exists some vector $$ (conditioned on ${{\Delta }}$) yielding $ (_{,:})_{} = 1 $ if $ - = {{\Delta }}$ and zero, otherwise. We show by construction the existence of a $D_p=3$ dimensional relative encoding scheme yielding the required attention probabilities. As the attention probabilities are independent of the input tensor $$, we set $_{\!\textit {key}}=_{\!\textit {qry}}={0}$ which leaves only the last term of eq:attrel. Setting $\widehat{}_{\!\textit {key}}\in ^{D_k \times D_p}$ to the identity matrix (with appropriate row padding), yields $_{, } = ^{\top } _{{{\delta }}}$ where $\quad {{\delta }}:= - $. Above, we have assumed that $D_p \le D_k$ such that no information from $_{{{\delta }}}$ is lost. Now, suppose that we could write: , = -(- 2 + c) for some constant $c$. In the above expression, the maximum attention score over $_{, :}$ is $-\alpha c$ and it is reached for $_{, }$ with ${{\delta }}= {{\Delta }}$. On the other hand, the $\alpha $ coefficient can be used to scale arbitrarily the difference between $_{,{{\Delta }}}$ and the other attention scores. In this way, for ${{\delta }}= {{\Delta }}$, we have (,:) = e-(- 2+c) ' e-((- ')- 2+c) = e-- 2 e-c ' e-(- ')- 2 e-c = e-- 2 ' e-(- ')- 2 = 1 1 + ' e-(- ')- 2 = 1 and for ${{\delta }}\ne {{\Delta }}$, the equation becomes $ \lim _{\alpha \rightarrow \infty } (_{,:})_{} = 0, $ exactly as needed to satisfy the lemma statement. What remains is to prove that there exist $$ and $\lbrace _{{\delta }}\rbrace _{{{\delta }}\in \mathcal {Z}^2}$ for which eq:isoattdecomposed holds. 
Expanding the rhs of the equation, we have $ -\alpha (\Vert {{\delta }}- {{\Delta }}\Vert ^2 + c) = -\alpha ( \Vert {{\delta }}\Vert ^2 + \Vert {{\Delta }}\Vert ^2 - 2\langle {{\delta }}, {{\Delta }}\rangle + c )\,. $ Now if we set $ = -\alpha \, (1, -2{{\Delta }}_1, -2{{\Delta }}_2) $ and $ _{{\delta }}= (\Vert {{\delta }}\Vert ^2 , {{\delta }}_1, {{\delta }}_2), $ then which matches eq:isoattdecomposed with $c = -\Vert {{\Delta }}\Vert ^2$ and the proof is concluded. Experiments The aim of this section is to validate the applicability of our theoretical results—which state that self-attention can perform convolution—and to examine whether self-attention layers in practice do actually learn to operate like convolutional layers, when being trained on standard image classification tasks. In particular, we study the relationship between self-attention and convolution with quadratic and learned relative positional encodings. We find that for both cases, the attention probabilities learned tend to respect the conditions of Lemma UNKREF15, corroborating our hypothesis. Experiments ::: Implementation Details We study a fully attentional model consisting of six multi-head self-attention layers. As it has already been shown by BIBREF9 that combining attention features with convolutional features improves performance on Cifar-100 and ImageNet, we do not focus on attaining state-of-the-art performance. Nevertheless, to validate that our model learns a meaningful classifier we compare it to the standard ResNet18 BIBREF14 on the CIFAR-10 dataset BIBREF15. In all experiments, we use a $2\times 2$ invertible down-sampling BIBREF16 on the input to reduce the size of the image as storing the attention coefficient tensor requires a large amount of GPU memory. The fixed size representation of the input image is computed as the average pooling of the last layer representations and given to a linear classifier. We used the PyTorch library BIBREF17 and based our implementation on PyTorch Transformers. We release our code on Github and all hyper-parameters are in tab:hyper-parameter in the Appendix. Experiments ::: Quadratic Encoding As a first step, we aim to verify that, with the relative position encoding introduced in (SECREF3), attention layers learn to behave like convolutional layers. We train nine attention heads at each layer to be on par with the $3\times 3$ kernels used predominantly by the ResNet architecture. The center of attention of each head $h$ is initialized to $\Delta ^{(h)} \sim \mathcal {N}({0}, 2_2)$. fig:isoduringtraining shows how the initial positions of the heads (different colors) at layer 4 changed during training. We can see that after optimization, the heads attend on specific pixel of the image forming a grid around the query pixel. Our intuition that Self-Attention applied to images learn convolutional filter around the queried pixel is then confirmed. fig:isoattentionfinal displays all attention head at each layer of the model at the end of the training. It can be seen that in the first few layers the heads tend to focus on local patterns (layers 1 and 2), while deeper layers (layers 3-6) also attend to larger patterns by positioning the center of attention further from the queried pixel position. We also include in the Appendix a plot of the attention positions for a higher number of heads ($N_h=16$), fig:isomanyheads displays both local patterns similar to CNN and long range dependencies. 
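To make the behavior of the quadratic encoding concrete, the following sketch builds $\mathbf{v} = -\alpha\,(1, -2\Delta_1, -2\Delta_2)$ and $\mathbf{r}_\delta = (\Vert \delta \Vert^2, \delta_1, \delta_2)$ over a window of shifts and shows that the softmax concentrates on $\delta = \Delta$ once $\alpha$ is moderately large; the function name is ours.

```python
import torch

def quadratic_attention_probs(center, alpha, K=5):
    """Attention probabilities over the K x K shifts induced by the quadratic encoding
    v = -alpha * (1, -2*Delta_1, -2*Delta_2), r_delta = (|delta|^2, delta_1, delta_2)."""
    rng = torch.arange(-(K // 2), K // 2 + 1)
    deltas = torch.cartesian_prod(rng, rng).float()               # (K*K, 2) shifts
    v = -alpha * torch.tensor([1.0, -2.0 * center[0], -2.0 * center[1]])
    r = torch.stack([(deltas ** 2).sum(dim=1), deltas[:, 0], deltas[:, 1]], dim=1)
    return deltas, torch.softmax(r @ v, dim=0)

deltas, probs = quadratic_attention_probs(center=(1.0, -1.0), alpha=10.0)
print(deltas[probs.argmax()], probs.max())  # shift (1, -1) receives essentially all the mass
```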
Interestingly, attention heads do not overlap and seem to take an arrangement maximizing the coverage of the input space. To verify that our self-attention model performs equally well as a small ResNet (tab:parametersize), in fig:learnedattentionmap we display the evolution of the test accuracy on CIFAR-10 over the 300 epochs of training. The ResNet is faster to converge, but we cannot ascertain whether this corresponds to an inherent property of the architecture or an artifact of the adopted optimization procedures. Our implementation could be optimized to exploit the locality of Gaussian attention probabilities and reduce significantly the number of FLOPS. Experiments ::: Learned Relative Positional Encoding We move on to study the positional encoding used in practice by fully-attentional models on images. We implemented the 2D relative positional encoding scheme used by BIBREF0, BIBREF9: we learn a $\lfloor D_p / 2 \rfloor $ position encoding vector for each row and each column pixel shift. Hence the relative positional encoding of a key pixel at position $$ with a query pixel at position $$ is the concatenation of the row shift embedding ${{\delta }}_1$ and the column shift embedding ${{\delta }}_2$ (where ${{\delta }}= - $). We chose $D_p = D_{\textit {out}} = 400$ in the experiment. We differ from the (unpublished) implementation described by BIBREF0 in the following points: (i) we do not use convolution stem and ResNet bottlenecks for downsampling, but only a $2\times 2$ invertible downsampling layer BIBREF16 at input, (ii) we use $D_h = D_{\textit {out}}$ instead of $D_h = D_{\textit {out}} / N_h$ backed from our theory that the effective number of learned filters is $\min (D_h, D_{\textit {out}})$, (iii) the attention scores are computed using only the relative positions of the pixels and not the data. As seen in tab:parametersize, our implementation achieves accuracy close to that of ResNet18. The attention probabilities of each head at each layer are displayed on fig:learnedattentionmap. The figure confirms our hypothesis for the first two layers and partially for the third: even when left to learn the encoding from the data, certain self-attention heads (depicted on the left) learn to attend to individual pixels, closely matching the condition of Lemma UNKREF15 and thus Theorem UNKREF11. At the same time, other heads pay attention to horizontally-symmetric but non-localized patterns, as well as to long-range pixel inter-dependencies. The phenomenon is particularly prominent for layers four to six, where the behavior of self-attention can be seen to deviate from that of convolution. We also notice that vertical symmetry is much more rare in the learned attention probabilities of high layers. This matches the intuition that, for image classification, distinguishing between what is above or below something is more crucial than what is left or right. Finally, the fact that some of the heads in the last two layers seem to be redundant, likely indicating that the computational and space complexity of the model could be amenable to further reduction, for example by pruning. Conclusion We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that learned fully-attentional models do behave similar to CNN in practice. More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters—similar to deformable convolutions BIBREF18, BIBREF19. 
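As a compact numerical illustration of this expressivity statement (a check we add here, not taken from the paper's code): giving each of the $K^2$ heads a hard one-hot attention on one shift and setting its value matrix to the matching kernel slice, as in the Lemma UNKREF15 construction, reproduces a convolution on interior pixels (interior only, to avoid padding effects).

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
width, height, D_in, D_out, K = 6, 6, 4, 5, 3
X = torch.randn(width, height, D_in)
kernel = torch.randn(K, K, D_out, D_in)
bias = torch.randn(D_out)

# Reference convolution (no padding, so only interior pixels are produced).
conv = F.conv2d(X.permute(2, 0, 1).unsqueeze(0),
                kernel.permute(2, 3, 0, 1), bias=bias)        # (1, D_out, width-2, height-2)

# "Attention" with one-hot probabilities: head (k1, k2) attends exactly to that shift,
# and its matrix W^(h) is the matching kernel slice.
out = torch.zeros(width - 2, height - 2, D_out) + bias
for k1 in range(K):
    for k2 in range(K):
        shifted = X[k1:k1 + width - 2, k2:k2 + height - 2, :]  # key pixels at shift (k1, k2)
        out = out + shifted @ kernel[k1, k2].T                 # (., ., D_in) @ (D_in, D_out)

print(torch.allclose(out.permute(2, 0, 1), conv[0], atol=1e-5))  # True
```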
Interesting directions for future work include translating existing insights from the rich CNNs literature back to transformers on various data modalities, including images, text and time series. Also, though we currently lack the computational resources to do so, we would be interested to test whether our findings are replicated for datasets of larger-scale, such as ImageNet and COCO. Appendix ::: Generalized Quadratic Positional Encoding We noticed the similarity of the attention probabilities in the quadratic positional encoding (sec:attentioncanimplementcnn) to isotropic bivariate Gaussian distributions with bounded support: (, :) = e-(- ) - 2' [W] [H] e-(' - ) - 2 . Building on this observation, we further extended our attention mechanism to non-isotropic Gaussian distribution over pixel positions. Each head is parametrized by a center of attention ${{\Delta }}$ and a covariance matrix $$ to obtain the following attention scores, , = -1 2 (- )-1 (- ) = -1 2 -1 + -1 - 1 2 -1 , where, once more, ${{\delta }}= - $. The last term can be discarded because the softmax is shift invariant and we rewrite the attention coefficient as a dot product between the head target vector $$ and the relative position encoding $_{{{\delta }}}$ (consisting of the first and second order combinations of the shift in pixels ${{\delta }}$): Appendix ::: Generalized Quadratic Positional Encoding ::: Evaluation. We trained our model using this generalized quadratic relative position encoding. We were curious to see if, using the above encoding the self-attention model would learn to attend to non-isotropic groups of pixels—thus forming unseen patterns in CNNs. Each head was parametrized by ${{\Delta }}\in ^2$ and $^{-1/2} \in ^{2\times 2}$ to ensure that the covariance matrix remained positive semi-definite. We initialized the center of attention to ${{\Delta }}^{(h)} \sim \mathcal {N}(\mathbf {0}, 2 _2)$ and $^{-1/2} = _2 + \mathcal {N}(\mathbf {0}, 0.01 _2)$ so that initial attention probabilities were close to an isotropic Gaussian. fig:nonisoattentionfinal shows that the network did learn non-isotropic attention probability patterns, especially in high layers. Nevertheless, the fact that we do not obtain any performance improvement seems to suggest that attention non-isotropy is not particularly helpful in practice—the quadratic positional encoding suffices. Appendix ::: Generalized Quadratic Positional Encoding ::: Pruning degenerated heads. We noticed that certain non-isotropic attention heads attended on “non-intuitive” patches of pixels: either attending a very thin stripe of pixels, when $^{-1}$ was almost singular, or attending all pixels uniformly, when $^{-1}$ was close to $\mathbf {0}$ (i.e. constant attention scores). We asked ourselves, are such attention patterns indeed useful for the model or are these heads degenerated and unused? To find out, we pruned all heads having largest eigen-values smaller than $10^{-5}$ or condition number (ratio of the biggest and smallest eigen-values) greater than $10^5$. Specifically in our model with 6-layer and 9-heads each, we pruned $[2, 4, 1, 2, 6, 0]$ heads from the first to the last layer. This means that these layers cannot express a $3 \times 3$ kernel anymore. As shown in yellow on fig:learningcurve, this ablation initially hurts a bit the performance, probably due to off biases, but after a few epochs of continued training with a smaller learning rate (divided by 10) the accuracy recovers its unpruned value. 
Hence, without sacrificing performance, we reduce the size of the parameters and the number of FLOPS by a fourth. Appendix ::: Increasing the Number of Heads For completeness, we also tested increasing the number of heads of our architecture from 9 to 16. Similar to Figure FIGREF23, we see that the network distinguishes two main types of attention patterns. Localized heads (i.e., those that attend to nearly individual pixels) appear more frequently in the first few layers. The self-attention layer uses these heads to act in a manner similar to how convolutional layers do. Heads with less-localized attention become more common at higher layers.
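For reference, the generalized quadratic scores of the appendix above, where each head rates a shift by $-\tfrac{1}{2}(\delta - \Delta)^\top \Sigma^{-1} (\delta - \Delta)$ with $\Sigma^{-1}$ parametrized through $\Sigma^{-1/2}$, can be sketched as follows; the scores are computed directly rather than through the dot-product rewriting, and the names and perturbation scale of the initialization are illustrative.

```python
import torch

def gaussian_attention_scores(deltas, center, inv_sqrt_sigma):
    """score(delta) = -0.5 * (delta - Delta)^T Sigma^{-1} (delta - Delta),
    with Sigma^{-1} = (Sigma^{-1/2})^T Sigma^{-1/2}, so it stays positive semi-definite."""
    z = (deltas - center) @ inv_sqrt_sigma.T     # rows are Sigma^{-1/2} (delta - Delta)
    return -0.5 * (z * z).sum(dim=-1)

rng = torch.arange(-3, 4).float()
deltas = torch.cartesian_prod(rng, rng)                    # all shifts in a 7x7 window
center = torch.tensor([1.0, -1.0])
inv_sqrt_sigma = torch.eye(2) + 0.1 * torch.randn(2, 2)    # small perturbation of the identity
probs = torch.softmax(gaussian_attention_scores(deltas, center, inv_sqrt_sigma), dim=0)
print(deltas[probs.argmax()])  # tensor([ 1., -1.]): the most attended shift is the center
```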
Unanswerable
6fd07f4dc037a82c8fa0ed80469eb4171dcebf12
6fd07f4dc037a82c8fa0ed80469eb4171dcebf12_0
Q: What authors mean by sufficient number of heads? Text: Introduction Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer BIBREF1. Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 BIBREF2, BERT BIBREF3 and Transformer-XL BIBREF4, seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks. The key difference between transformers and previous methods, such as recurrent neural networks BIBREF5 and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence. This is made possible thanks to the attention mechanism—originally introduced in Neural Machine Translation to better handle long-range dependencies BIBREF6. With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations. The representation of each word is then updated based on those words whose attention score is highest. Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks. Self-attention was first added to CNN by either using channel-based attention BIBREF7 or non-local relationships across the image BIBREF8. More recently, BIBREF9 augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks. Interestingly, BIBREF0 noticed that, even though state-of-the art results are reached when attention and convolutional features are combined, under same computation and model size constraints, self-attention-only architectures also reach competitive image classification accuracy. These findings raise the question, do self-attention layers process images in a similar manner to convolutional layers? From a theoretical perspective, one could argue that transfomers have the capacity to simulate any function—including a CNN. Indeed, BIBREF10 showed that a multi-layer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic. Unfortunately, universality results do not reveal how a machine solves a task, only that it has the capacity to do so. Thus, the question of how self-attention layers actually process images remains open. Introduction ::: Contributions. In this work, we put forth theoretical and empirical evidence that self-attention layers can (and do) learn to behave similar to convolutional layers: From a theoretical perspective, we provide a constructive proof showing that self-attention layers can express any convolutional layers. Specifically, we show that a single multi-head self-attention layer using relative positional encoding can be re-parametrized to express any convolutional layer. Our insights lead to a relative positional encoding, that we refer to as quadratic encoding, that is very efficient in terms of size. Our experiments show that the first few layers of attention-only architectures BIBREF0 do learn to attend on grid-like pattern around each query pixel, similar to our theoretical construction. Strikingly, this behavior is confirmed both for our quadratic encoding, but also for relative encoding that is learned during training. 
Our results seem to suggest that localized convolution is the right inductive bias for the first few layers of an image classifying network. For deeper layers, on the other hand, long-range as well as horizontally-symmetric inter-dependencies become more relevant. For reproducibility purposes, our code is publicly available on GitHub. Background on Attention Mechanisms for Vision We here recall the mathematical formulation of self-attention layers and emphasize the role of positional encodings. Background on Attention Mechanisms for Vision ::: The Multi-Head Self-Attention Layer Let $\in ^{T\times D_{\textit {in}}}$ be an input matrix consisting of $T$ tokens in of ${D_{\textit {in}}}$ dimensions each. While in NLP each token corresponds to a word in a sentence, the same formalism can be applied to any sequence of $T$ discrete objects, e.g. pixels. A self-attention layer maps any query token $t \in [T]$ from $D_{\textit {in}}$ to $D_{\textit {out}}$ dimensions as follows: Self-Attention()t,: := ( t,: ) val, where we refer to the elements of the $T \times T$ matrix := qrykey as attention scores and the softmax output as attention probabilities. The layer is parametrized by a query matrix $_{\!\textit {qry}}\in ^{D_{\textit {in}} \times D_{k}}$, a key matrix $_{\!\textit {key}}\in ^{D_{\textit {in}} \times D_{k}}$ and a value matrix $_{\!\textit {val}}\in ^{D_{\textit {in}} \times D_{\textit {out}}}$.For simplicity, we exclude any residual connections, batch normalization and constant factors. A key property of the self-attention model described above is that it is equivariant to reordering, that is, it gives the same output independently of how the $T$ input tokens are shuffled. This is problematic for cases we expect the order of things to matter. To alleviate the limitation, a positional encoding is learned for each token in the sequence (or pixel in an image), and added to the representation of the token itself before applying self-attention := (+ ) qrykey(+ ), where $\in ^{T \times D_{\textit {in}}}$ contains the embedding vectors for each position. More generally, $$ may be substituted by any function that returns a vector representation of the position. It has been found beneficial in practice to replicate this self-attention mechanism into multiple heads, each being able to focus on different parts of the input by using different query, key and value matrices. In multi-head self-attention, the output of the $N_h$ heads of output dimension $D_h$ are concatenated and projected to dimension $D_{\textit {out}}$ as follows: MHSA() := *concath [Nh][Self-Attentionh()] out + out and two new parameters are introduced: the projection matrix $_{\!\textit {out}} \in ^{N_h D_h \times D_{\textit {out}}}$ and a bias term $_{\textit {out}}\in ^{D_{\textit {out}}}$. Background on Attention Mechanisms for Vision ::: Attention for Images Convolutional layers are the de facto choice for building neural networks that operate on images. We recall that, given an image tensor $~\in ~^{W\times H \times D_{\textit {in}}}$ of width $W$, height $H$ and $D_{\textit {in}}$ channels, the output of a convolutional layer for pixel $(i,j)$ is given by Conv()i,j,: := (1, 2) K 1,2,:,: i+1, j+2, : + , where $$ is the $K \times K \times D_{\textit {out}} \times D_{\textit {in}}$ weight tensor , $\in ^{D_{\textit {out}}}$ is the bias vector and the set contains all possible shifts appearing when convolving the image with a $K\times K$ kernel. 
In the following, we review how self-attention can be adapted from 1D sequences to images. With images, rather than tokens, we have query and key pixels $, \in [W] \times [H]$. Accordingly, the input is a tensor $$ of dimension $W \times H \times D_{\textit {in}}$ and each attention score associates a query and a key pixel. To keep the formulas consistent with the 1D case, we abuse notation and slice tensors by using a 2D index vector: if $= (i,j)$, we write $_{,:}$ and $_{,:}$ to mean $_{i, j,:}$ and $_{i, j,:,:}$, respectively. With this notation in place, the multi-head self attention layer output at pixel $$ can be expressed as follows: Self-Attention(),: = ( ,: ) ,: val and accordingly for the multi-head case. Background on Attention Mechanisms for Vision ::: Positional Encoding for Images There are two types of positional encoding that has been used in transformer-based architectures: the absolute and relative encoding (see also tab:relworkattention in the Appendix). With absolute encodings, a (fixed or learned) vector $_{,:}$ is assigned to each pixel $$. The computation of the attention scores we saw in eq:attcoeff can then be decomposed as follows: , abs = (,: + ,:) qrykey(,: + ,:) = ,: qrykey,: + ,: qrykey,: + ,:qrykey,: + ,: qrykey,: where $$ and $$ correspond to the query and key pixels, respectively. The relative positional encoding was introduced by BIBREF4. The main idea is to only consider the position difference between the query pixel (pixel we compute the representation of) and the key pixel (pixel we attend) instead of the absolute position of the key pixel: , rel := ,: qry key ,: + ,: qry key + key ,: + key In this manner, the attention scores only depend on the shift ${{\delta }}:= - $. Above, the learnable vectors $$ and $$ are unique for each head, whereas for every shift ${{\delta }}$ the relative positional encoding $_{{{\delta }}} \in ^{D_p}$ is shared by all layers and heads. Moreover, now the key weights are split into two types: $_{\!\textit {key}}$ pertain to the input and $\widehat{}_{\!\textit {key}}$ to the relative position of pixels. Self-Attention as a Convolutional Layer This section derives sufficient conditions such that a multi-head self-attention layer can simulate a convolutional layer. Our main result is the following: Theorem 1 A multi-head self-attention layer with $N_h$ heads of dimension $D_h$, output dimension $D_{\textit {out}}$ and a relative positional encoding of dimension $D_p \ge 3$ can express any convolutional layer of kernel size $\sqrt{N_h} \times \sqrt{N_h}$ and $\min (D_h, D_{\textit {out}})$ output channels. The theorem is proven constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer. In the proposed construction, the attention scores of each self-attention head should attend to a different relative shift within the set $\Delta \!\!\!\!\Delta _K = \lbrace -\lfloor K/2 \rfloor , \dots , \lfloor K/2 \rfloor \rbrace ^2$ of all pixel shifts in a $K\times K$ kernel. The exact condition can be found in the statement of Lemma UNKREF15. Then, Lemma UNKREF17 shows that the aforementioned condition is satisfied for the relative positional encoding that we refer to as the quadratic encoding: (h) := -(h) (1, -21(h), -22(h)) := (2 , 1, 2) qry = key := 0 key := The learned parameters ${{\Delta }}^{(h)} = ({{\Delta }}^{(h)}_1, {{\Delta }}^{(h)}_2)$ and $\alpha ^{(h)}$ determine the center and width of attention of each head, respectively. 
On the other hand, ${{\delta }}= ({{\delta }}_1, {{\delta }}_2)$ is fixed and expresses the relative shift between query and key pixels. It is important to stress that the above encoding is not the only one for which the conditions of Lemma UNKREF15 are satisfied. In fact, in our experiments, the relative encoding learned by the neural network also matched the conditions of the lemma (despite being different from the quadratic encoding). Nevertheless, the encoding defined above is very efficient in terms of size, as only $D_p = 3$ dimensions suffice to encode the relative position of pixels, while also reaching similar or better empirical performance (than the learned one). Though we lack a formal proof, we conjecture that every encoding that satisfies Lemma UNKREF15 should have at least three dimensions. Self-Attention as a Convolutional Layer ::: Remark for the 1D case. Convolutional layers acting on sequences are commonly used in the literature for text BIBREF11, as well as audio BIBREF12 and time series BIBREF13. Theorem UNKREF11 can be straightforwardly extended to show that multi-head self-attention with $N_h$ heads can also simulate a 1D convolutional layer with a kernel of size $K=N_h$ with $\min (D_h, D_{\textit {out}})$ output channels using a positional encoding of dimension $D_p \ge 2$. Since we have not tested empirically if the preceding construction matches the behavior of 1D self-attention in practice, we cannot claim that it actually learns to convolve an input sequence—only that it has the capacity to do so. Self-Attention as a Convolutional Layer ::: Proof of Main Theorem The proof follows directly from Lemmas UNKREF15 and UNKREF17 stated below: Lemma 1 Consider a multi-head self-attention layer consisting of $N_h = K^2$ heads, $D_h \ge D_{\textit {out}}$ and let $~:~[N_h]~\rightarrow ~{\Delta \!\!\!\!\Delta }_K$ be a bijective mapping of heads onto shifts. Further, suppose that for every head the following holds: ((h),:) = {ll 1 if (h) = - 0 otherwise. . Then, for any convolutional layer with a $K \times K$ kernel and $D_{\textit {out}}$ output channels, there exists $\lbrace _{\!\textit {val}}^{(h)}\rbrace _{h \in [N_h]}$ such that $ \operatorname{MHSA}() = \operatorname{Conv}() $ for every $\in ^{W \times H \times D_{\textit {in}}}$. Our first step will be to rework the expression of the Multi-Head Self-Attention operator from (SECREF6) and (SECREF6) such that the effect of the multiple heads becomes more transparent: MHSA() = out+ h [Nh] ((h)) val(h) out[(h-1)Dh + 1:h Dh +1] (h) Note that each head's value matrix $_{\!\textit {val}}^{(h)} \in ^{D_{\textit {in}} \times D_{h}}$ and each block of the projection matrix $_{\textit {out}}$ of dimension $D_h \times D_{\textit {out}}$ are learned. Assuming that $D_h \ge D_{\textit {out}}$, we can replace each pair of matrices by a learned matrix $^{(h)}$ for each head. We consider one output pixel of the multi-head self-attention: MHSA(),: = h [Nh] ( ((h),:) ,: ) (h) + out Due to the conditions of the Lemma, for the $h$-th attention head the attention probability is one when $ = - (h) $ and zero otherwise. The layer's output at pixel $$ is thus equal to MHSA() = h [Nh] - (h),: (h) + out For $K = \sqrt{N_h}$, the above can be seen to be equivalent to a convolutional layer expressed in eq. 
SECREF8: there is a one to one mapping (implied by map $$) between the matrices $^{(h)}$ for $h = [N_h]$ and the matrices $_{k_1,k_2,:,:}$ for all $(k_1,k_2) \in [K]^2.$ Self-Attention as a Convolutional Layer ::: Proof of Main Theorem ::: Remark about @!START@$D_h$@!END@ and @!START@$D_{\textit {out}}$@!END@. It is frequent in transformer-based architectures to set $D_h~=~D_{\textit {out}}/N_h$, hence $D_h < D_{\textit {out}}$. In that case, $^{(h)}$ can be seen to be of rank $D_{\textit {out}} - D_h$, which does not suffice to express every convolutional layer with $D_{\textit {out}}$ channels. Nevertheless, it can be seen that any $D_h$ out of $D_{\textit {out}}$ outputs of $\operatorname{MHSA}()$ can express the output of any convolutional layer with $D_h$ output channels. To cover both cases, in the statement of the main theorem we assert that the output channels of the convolutional layer should be $\min (D_h, D_{\textit {out}})$. In practice, we advise to concatenate heads of dimension $D_h = D_{\textit {out}}$ instead of splitting the $D_{\textit {out}}$ dimensions among heads to have exact re-parametrization and no “unused” channels. Lemma 2 There exists a relative encoding scheme $\lbrace _{{\delta }}\in ^{D_p}\rbrace _{{{\delta }}\in \mathbb {Z}^2}$ with $D_p \ge 3$ and parameters $_{\!\textit {qry}}, _{\!\textit {key}}, \widehat{}_{\!\textit {key}},$ with $D_p \le D_k$ such that, for every ${{\Delta }}\in \Delta \!\!\!\!\Delta _K$ there exists some vector $$ (conditioned on ${{\Delta }}$) yielding $ (_{,:})_{} = 1 $ if $ - = {{\Delta }}$ and zero, otherwise. We show by construction the existence of a $D_p=3$ dimensional relative encoding scheme yielding the required attention probabilities. As the attention probabilities are independent of the input tensor $$, we set $_{\!\textit {key}}=_{\!\textit {qry}}={0}$ which leaves only the last term of eq:attrel. Setting $\widehat{}_{\!\textit {key}}\in ^{D_k \times D_p}$ to the identity matrix (with appropriate row padding), yields $_{, } = ^{\top } _{{{\delta }}}$ where $\quad {{\delta }}:= - $. Above, we have assumed that $D_p \le D_k$ such that no information from $_{{{\delta }}}$ is lost. Now, suppose that we could write: , = -(- 2 + c) for some constant $c$. In the above expression, the maximum attention score over $_{, :}$ is $-\alpha c$ and it is reached for $_{, }$ with ${{\delta }}= {{\Delta }}$. On the other hand, the $\alpha $ coefficient can be used to scale arbitrarily the difference between $_{,{{\Delta }}}$ and the other attention scores. In this way, for ${{\delta }}= {{\Delta }}$, we have (,:) = e-(- 2+c) ' e-((- ')- 2+c) = e-- 2 e-c ' e-(- ')- 2 e-c = e-- 2 ' e-(- ')- 2 = 1 1 + ' e-(- ')- 2 = 1 and for ${{\delta }}\ne {{\Delta }}$, the equation becomes $ \lim _{\alpha \rightarrow \infty } (_{,:})_{} = 0, $ exactly as needed to satisfy the lemma statement. What remains is to prove that there exist $$ and $\lbrace _{{\delta }}\rbrace _{{{\delta }}\in \mathcal {Z}^2}$ for which eq:isoattdecomposed holds. Expanding the rhs of the equation, we have $ -\alpha (\Vert {{\delta }}- {{\Delta }}\Vert ^2 + c) = -\alpha ( \Vert {{\delta }}\Vert ^2 + \Vert {{\Delta }}\Vert ^2 - 2\langle {{\delta }}, {{\Delta }}\rangle + c )\,. $ Now if we set $ = -\alpha \, (1, -2{{\Delta }}_1, -2{{\Delta }}_2) $ and $ _{{\delta }}= (\Vert {{\delta }}\Vert ^2 , {{\delta }}_1, {{\delta }}_2), $ then which matches eq:isoattdecomposed with $c = -\Vert {{\Delta }}\Vert ^2$ and the proof is concluded. 
Experiments The aim of this section is to validate the applicability of our theoretical results—which state that self-attention can perform convolution—and to examine whether self-attention layers in practice do actually learn to operate like convolutional layers, when being trained on standard image classification tasks. In particular, we study the relationship between self-attention and convolution with quadratic and learned relative positional encodings. We find that for both cases, the attention probabilities learned tend to respect the conditions of Lemma UNKREF15, corroborating our hypothesis. Experiments ::: Implementation Details We study a fully attentional model consisting of six multi-head self-attention layers. As it has already been shown by BIBREF9 that combining attention features with convolutional features improves performance on Cifar-100 and ImageNet, we do not focus on attaining state-of-the-art performance. Nevertheless, to validate that our model learns a meaningful classifier we compare it to the standard ResNet18 BIBREF14 on the CIFAR-10 dataset BIBREF15. In all experiments, we use a $2\times 2$ invertible down-sampling BIBREF16 on the input to reduce the size of the image as storing the attention coefficient tensor requires a large amount of GPU memory. The fixed size representation of the input image is computed as the average pooling of the last layer representations and given to a linear classifier. We used the PyTorch library BIBREF17 and based our implementation on PyTorch Transformers. We release our code on Github and all hyper-parameters are in tab:hyper-parameter in the Appendix. Experiments ::: Quadratic Encoding As a first step, we aim to verify that, with the relative position encoding introduced in (SECREF3), attention layers learn to behave like convolutional layers. We train nine attention heads at each layer to be on par with the $3\times 3$ kernels used predominantly by the ResNet architecture. The center of attention of each head $h$ is initialized to $\Delta ^{(h)} \sim \mathcal {N}({0}, 2_2)$. fig:isoduringtraining shows how the initial positions of the heads (different colors) at layer 4 changed during training. We can see that after optimization, the heads attend on specific pixel of the image forming a grid around the query pixel. Our intuition that Self-Attention applied to images learn convolutional filter around the queried pixel is then confirmed. fig:isoattentionfinal displays all attention head at each layer of the model at the end of the training. It can be seen that in the first few layers the heads tend to focus on local patterns (layers 1 and 2), while deeper layers (layers 3-6) also attend to larger patterns by positioning the center of attention further from the queried pixel position. We also include in the Appendix a plot of the attention positions for a higher number of heads ($N_h=16$), fig:isomanyheads displays both local patterns similar to CNN and long range dependencies. Interestingly, attention heads do not overlap and seem to take an arrangement maximizing the coverage of the input space. To verify that our self-attention model performs equally well as a small ResNet (tab:parametersize), in fig:learnedattentionmap we display the evolution of the test accuracy on CIFAR-10 over the 300 epochs of training. The ResNet is faster to converge, but we cannot ascertain whether this corresponds to an inherent property of the architecture or an artifact of the adopted optimization procedures. 
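The $2\times 2$ invertible down-sampling mentioned in the implementation details above is commonly realized as a space-to-depth rearrangement; a minimal sketch follows (our naming, one standard realization, not necessarily the exact layer of BIBREF16). Because it only permutes pixel values, the map is exactly invertible.

```python
import torch

def invertible_downsample_2x2(x):
    """Space-to-depth: each 2x2 block of pixels is stacked into the channel dimension,
    halving the spatial resolution and quadrupling the number of channels."""
    b, c, h, w = x.shape
    x = x.reshape(b, c, h // 2, 2, w // 2, 2)
    return x.permute(0, 1, 3, 5, 2, 4).reshape(b, 4 * c, h // 2, w // 2)

print(invertible_downsample_2x2(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 12, 16, 16])
```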
Our implementation could be optimized to exploit the locality of Gaussian attention probabilities and reduce significantly the number of FLOPS. Experiments ::: Learned Relative Positional Encoding We move on to study the positional encoding used in practice by fully-attentional models on images. We implemented the 2D relative positional encoding scheme used by BIBREF0, BIBREF9: we learn a $\lfloor D_p / 2 \rfloor $ position encoding vector for each row and each column pixel shift. Hence the relative positional encoding of a key pixel at position $$ with a query pixel at position $$ is the concatenation of the row shift embedding ${{\delta }}_1$ and the column shift embedding ${{\delta }}_2$ (where ${{\delta }}= - $). We chose $D_p = D_{\textit {out}} = 400$ in the experiment. We differ from the (unpublished) implementation described by BIBREF0 in the following points: (i) we do not use convolution stem and ResNet bottlenecks for downsampling, but only a $2\times 2$ invertible downsampling layer BIBREF16 at input, (ii) we use $D_h = D_{\textit {out}}$ instead of $D_h = D_{\textit {out}} / N_h$ backed from our theory that the effective number of learned filters is $\min (D_h, D_{\textit {out}})$, (iii) the attention scores are computed using only the relative positions of the pixels and not the data. As seen in tab:parametersize, our implementation achieves accuracy close to that of ResNet18. The attention probabilities of each head at each layer are displayed on fig:learnedattentionmap. The figure confirms our hypothesis for the first two layers and partially for the third: even when left to learn the encoding from the data, certain self-attention heads (depicted on the left) learn to attend to individual pixels, closely matching the condition of Lemma UNKREF15 and thus Theorem UNKREF11. At the same time, other heads pay attention to horizontally-symmetric but non-localized patterns, as well as to long-range pixel inter-dependencies. The phenomenon is particularly prominent for layers four to six, where the behavior of self-attention can be seen to deviate from that of convolution. We also notice that vertical symmetry is much more rare in the learned attention probabilities of high layers. This matches the intuition that, for image classification, distinguishing between what is above or below something is more crucial than what is left or right. Finally, the fact that some of the heads in the last two layers seem to be redundant, likely indicating that the computational and space complexity of the model could be amenable to further reduction, for example by pruning. Conclusion We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that learned fully-attentional models do behave similar to CNN in practice. More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters—similar to deformable convolutions BIBREF18, BIBREF19. Interesting directions for future work include translating existing insights from the rich CNNs literature back to transformers on various data modalities, including images, text and time series. Also, though we currently lack the computational resources to do so, we would be interested to test whether our findings are replicated for datasets of larger-scale, such as ImageNet and COCO. 
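Concretely, the learned 2D relative encoding described above (a row-shift embedding concatenated with a column-shift embedding, with scores computed from positions only as in point (iii)) can be sketched as follows; module and variable names are ours.

```python
import torch

def relative_position_scores(width, height, D_p, v):
    """Position-only attention scores with a learned 2D relative encoding:
    r_delta = concat(row_emb[delta_1], col_emb[delta_2]) and score(q, k) = <v, r_{k-q}>."""
    row_emb = torch.nn.Embedding(2 * width - 1, D_p // 2)    # shifts in [-(width-1), width-1]
    col_emb = torch.nn.Embedding(2 * height - 1, D_p // 2)
    ii, jj = torch.meshgrid(torch.arange(width), torch.arange(height), indexing="ij")
    coords = torch.stack([ii, jj], dim=-1).reshape(-1, 2)            # (width*height, 2)
    delta = coords[None, :, :] - coords[:, None, :]                  # delta[q, k] = k - q
    r = torch.cat([row_emb(delta[..., 0] + width - 1),
                   col_emb(delta[..., 1] + height - 1)], dim=-1)     # (N, N, D_p)
    return r @ v                                                     # (N, N) scores

scores = relative_position_scores(width=8, height=8, D_p=16, v=torch.randn(16))
probs = torch.softmax(scores, dim=-1)    # one attention map per query pixel
```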
Appendix ::: Generalized Quadratic Positional Encoding We noticed the similarity of the attention probabilities in the quadratic positional encoding (sec:attentioncanimplementcnn) to isotropic bivariate Gaussian distributions with bounded support: (, :) = e-(- ) - 2' [W] [H] e-(' - ) - 2 . Building on this observation, we further extended our attention mechanism to non-isotropic Gaussian distribution over pixel positions. Each head is parametrized by a center of attention ${{\Delta }}$ and a covariance matrix $$ to obtain the following attention scores, , = -1 2 (- )-1 (- ) = -1 2 -1 + -1 - 1 2 -1 , where, once more, ${{\delta }}= - $. The last term can be discarded because the softmax is shift invariant and we rewrite the attention coefficient as a dot product between the head target vector $$ and the relative position encoding $_{{{\delta }}}$ (consisting of the first and second order combinations of the shift in pixels ${{\delta }}$): Appendix ::: Generalized Quadratic Positional Encoding ::: Evaluation. We trained our model using this generalized quadratic relative position encoding. We were curious to see if, using the above encoding the self-attention model would learn to attend to non-isotropic groups of pixels—thus forming unseen patterns in CNNs. Each head was parametrized by ${{\Delta }}\in ^2$ and $^{-1/2} \in ^{2\times 2}$ to ensure that the covariance matrix remained positive semi-definite. We initialized the center of attention to ${{\Delta }}^{(h)} \sim \mathcal {N}(\mathbf {0}, 2 _2)$ and $^{-1/2} = _2 + \mathcal {N}(\mathbf {0}, 0.01 _2)$ so that initial attention probabilities were close to an isotropic Gaussian. fig:nonisoattentionfinal shows that the network did learn non-isotropic attention probability patterns, especially in high layers. Nevertheless, the fact that we do not obtain any performance improvement seems to suggest that attention non-isotropy is not particularly helpful in practice—the quadratic positional encoding suffices. Appendix ::: Generalized Quadratic Positional Encoding ::: Pruning degenerated heads. We noticed that certain non-isotropic attention heads attended on “non-intuitive” patches of pixels: either attending a very thin stripe of pixels, when $^{-1}$ was almost singular, or attending all pixels uniformly, when $^{-1}$ was close to $\mathbf {0}$ (i.e. constant attention scores). We asked ourselves, are such attention patterns indeed useful for the model or are these heads degenerated and unused? To find out, we pruned all heads having largest eigen-values smaller than $10^{-5}$ or condition number (ratio of the biggest and smallest eigen-values) greater than $10^5$. Specifically in our model with 6-layer and 9-heads each, we pruned $[2, 4, 1, 2, 6, 0]$ heads from the first to the last layer. This means that these layers cannot express a $3 \times 3$ kernel anymore. As shown in yellow on fig:learningcurve, this ablation initially hurts a bit the performance, probably due to off biases, but after a few epochs of continued training with a smaller learning rate (divided by 10) the accuracy recovers its unpruned value. Hence, without sacrificing performance, we reduce the size of the parameters and the number of FLOPS by a fourth. Appendix ::: Increasing the Number of Heads For completeness, we also tested increasing the number of heads of our architecture from 9 to 16. Similar to Figure FIGREF23, we see that the network distinguishes two main types of attention patterns. 
Localized heads (i.e., those that attend to nearly individual pixels) appear more frequently in the first few layers. The self-attention layer uses these heads to act in a manner similar to how convolutional layers do. Heads with less-localized attention become more common at higher layers.
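Returning to the pruning of degenerated heads described in this appendix, a minimal sketch of the criterion (largest eigenvalue of $\Sigma^{-1}$ below $10^{-5}$, or condition number above $10^{5}$) is given below; the helper name and the example inputs are hypothetical.

```python
import torch

def degenerated_heads(inv_sqrt_sigmas, eig_min=1e-5, max_cond=1e5):
    """Flag heads to prune. Sigma^{-1} = (Sigma^{-1/2})^T Sigma^{-1/2}; a head is degenerate
    if its largest eigenvalue is below eig_min (near-uniform attention) or its condition
    number exceeds max_cond (near-singular, thin-stripe attention)."""
    flags = []
    for A in inv_sqrt_sigmas:                     # each A is a learned 2x2 Sigma^{-1/2}
        eig = torch.linalg.eigvalsh(A.T @ A)      # eigenvalues of Sigma^{-1}, ascending
        top, bottom = eig[-1], eig[0].clamp_min(1e-12)
        flags.append(bool(top < eig_min) or bool(top / bottom > max_cond))
    return flags

heads = [torch.eye(2), 1e-4 * torch.eye(2), torch.diag(torch.tensor([1.0, 1e-7]))]
print(degenerated_heads(heads))  # [False, True, True]
```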
Unanswerable
2caa8726222237af482e170c51c88099cefef6fc
2caa8726222237af482e170c51c88099cefef6fc_0
Q: Is there any nonnumerical experiment that also support author's claim, like analysis of attention layers in publicly available networks? Text: Introduction Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer BIBREF1. Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 BIBREF2, BERT BIBREF3 and Transformer-XL BIBREF4, seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks. The key difference between transformers and previous methods, such as recurrent neural networks BIBREF5 and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence. This is made possible thanks to the attention mechanism—originally introduced in Neural Machine Translation to better handle long-range dependencies BIBREF6. With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations. The representation of each word is then updated based on those words whose attention score is highest. Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks. Self-attention was first added to CNN by either using channel-based attention BIBREF7 or non-local relationships across the image BIBREF8. More recently, BIBREF9 augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks. Interestingly, BIBREF0 noticed that, even though state-of-the art results are reached when attention and convolutional features are combined, under same computation and model size constraints, self-attention-only architectures also reach competitive image classification accuracy. These findings raise the question, do self-attention layers process images in a similar manner to convolutional layers? From a theoretical perspective, one could argue that transfomers have the capacity to simulate any function—including a CNN. Indeed, BIBREF10 showed that a multi-layer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic. Unfortunately, universality results do not reveal how a machine solves a task, only that it has the capacity to do so. Thus, the question of how self-attention layers actually process images remains open. Introduction ::: Contributions. In this work, we put forth theoretical and empirical evidence that self-attention layers can (and do) learn to behave similar to convolutional layers: From a theoretical perspective, we provide a constructive proof showing that self-attention layers can express any convolutional layers. Specifically, we show that a single multi-head self-attention layer using relative positional encoding can be re-parametrized to express any convolutional layer. Our insights lead to a relative positional encoding, that we refer to as quadratic encoding, that is very efficient in terms of size. Our experiments show that the first few layers of attention-only architectures BIBREF0 do learn to attend on grid-like pattern around each query pixel, similar to our theoretical construction. 
Strikingly, this behavior is confirmed both for our quadratic encoding, but also for relative encoding that is learned during training. Our results seem to suggest that localized convolution is the right inductive bias for the first few layers of an image classifying network. For deeper layers, on the other hand, long-range as well as horizontally-symmetric inter-dependencies become more relevant. For reproducibility purposes, our code is publicly available on GitHub. Background on Attention Mechanisms for Vision We here recall the mathematical formulation of self-attention layers and emphasize the role of positional encodings. Background on Attention Mechanisms for Vision ::: The Multi-Head Self-Attention Layer Let $\in ^{T\times D_{\textit {in}}}$ be an input matrix consisting of $T$ tokens in of ${D_{\textit {in}}}$ dimensions each. While in NLP each token corresponds to a word in a sentence, the same formalism can be applied to any sequence of $T$ discrete objects, e.g. pixels. A self-attention layer maps any query token $t \in [T]$ from $D_{\textit {in}}$ to $D_{\textit {out}}$ dimensions as follows: Self-Attention()t,: := ( t,: ) val, where we refer to the elements of the $T \times T$ matrix := qrykey as attention scores and the softmax output as attention probabilities. The layer is parametrized by a query matrix $_{\!\textit {qry}}\in ^{D_{\textit {in}} \times D_{k}}$, a key matrix $_{\!\textit {key}}\in ^{D_{\textit {in}} \times D_{k}}$ and a value matrix $_{\!\textit {val}}\in ^{D_{\textit {in}} \times D_{\textit {out}}}$.For simplicity, we exclude any residual connections, batch normalization and constant factors. A key property of the self-attention model described above is that it is equivariant to reordering, that is, it gives the same output independently of how the $T$ input tokens are shuffled. This is problematic for cases we expect the order of things to matter. To alleviate the limitation, a positional encoding is learned for each token in the sequence (or pixel in an image), and added to the representation of the token itself before applying self-attention := (+ ) qrykey(+ ), where $\in ^{T \times D_{\textit {in}}}$ contains the embedding vectors for each position. More generally, $$ may be substituted by any function that returns a vector representation of the position. It has been found beneficial in practice to replicate this self-attention mechanism into multiple heads, each being able to focus on different parts of the input by using different query, key and value matrices. In multi-head self-attention, the output of the $N_h$ heads of output dimension $D_h$ are concatenated and projected to dimension $D_{\textit {out}}$ as follows: MHSA() := *concath [Nh][Self-Attentionh()] out + out and two new parameters are introduced: the projection matrix $_{\!\textit {out}} \in ^{N_h D_h \times D_{\textit {out}}}$ and a bias term $_{\textit {out}}\in ^{D_{\textit {out}}}$. Background on Attention Mechanisms for Vision ::: Attention for Images Convolutional layers are the de facto choice for building neural networks that operate on images. 
We recall that, given an image tensor $~\in ~^{W\times H \times D_{\textit {in}}}$ of width $W$, height $H$ and $D_{\textit {in}}$ channels, the output of a convolutional layer for pixel $(i,j)$ is given by Conv()i,j,: := (1, 2) K 1,2,:,: i+1, j+2, : + , where $$ is the $K \times K \times D_{\textit {out}} \times D_{\textit {in}}$ weight tensor , $\in ^{D_{\textit {out}}}$ is the bias vector and the set contains all possible shifts appearing when convolving the image with a $K\times K$ kernel. In the following, we review how self-attention can be adapted from 1D sequences to images. With images, rather than tokens, we have query and key pixels $, \in [W] \times [H]$. Accordingly, the input is a tensor $$ of dimension $W \times H \times D_{\textit {in}}$ and each attention score associates a query and a key pixel. To keep the formulas consistent with the 1D case, we abuse notation and slice tensors by using a 2D index vector: if $= (i,j)$, we write $_{,:}$ and $_{,:}$ to mean $_{i, j,:}$ and $_{i, j,:,:}$, respectively. With this notation in place, the multi-head self attention layer output at pixel $$ can be expressed as follows: Self-Attention(),: = ( ,: ) ,: val and accordingly for the multi-head case. Background on Attention Mechanisms for Vision ::: Positional Encoding for Images There are two types of positional encoding that has been used in transformer-based architectures: the absolute and relative encoding (see also tab:relworkattention in the Appendix). With absolute encodings, a (fixed or learned) vector $_{,:}$ is assigned to each pixel $$. The computation of the attention scores we saw in eq:attcoeff can then be decomposed as follows: , abs = (,: + ,:) qrykey(,: + ,:) = ,: qrykey,: + ,: qrykey,: + ,:qrykey,: + ,: qrykey,: where $$ and $$ correspond to the query and key pixels, respectively. The relative positional encoding was introduced by BIBREF4. The main idea is to only consider the position difference between the query pixel (pixel we compute the representation of) and the key pixel (pixel we attend) instead of the absolute position of the key pixel: , rel := ,: qry key ,: + ,: qry key + key ,: + key In this manner, the attention scores only depend on the shift ${{\delta }}:= - $. Above, the learnable vectors $$ and $$ are unique for each head, whereas for every shift ${{\delta }}$ the relative positional encoding $_{{{\delta }}} \in ^{D_p}$ is shared by all layers and heads. Moreover, now the key weights are split into two types: $_{\!\textit {key}}$ pertain to the input and $\widehat{}_{\!\textit {key}}$ to the relative position of pixels. Self-Attention as a Convolutional Layer This section derives sufficient conditions such that a multi-head self-attention layer can simulate a convolutional layer. Our main result is the following: Theorem 1 A multi-head self-attention layer with $N_h$ heads of dimension $D_h$, output dimension $D_{\textit {out}}$ and a relative positional encoding of dimension $D_p \ge 3$ can express any convolutional layer of kernel size $\sqrt{N_h} \times \sqrt{N_h}$ and $\min (D_h, D_{\textit {out}})$ output channels. The theorem is proven constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer. In the proposed construction, the attention scores of each self-attention head should attend to a different relative shift within the set $\Delta \!\!\!\!\Delta _K = \lbrace -\lfloor K/2 \rfloor , \dots , \lfloor K/2 \rfloor \rbrace ^2$ of all pixel shifts in a $K\times K$ kernel. 
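To make the construction concrete, the short snippet below (illustrative only; variable names are ours) enumerates the shift set for a K x K kernel and fixes one possible bijection between the N_h = K^2 heads and those shifts.

```python
from itertools import product

K = 3                                        # kernel size
N_h = K * K                                  # heads required by the construction
offsets = range(-(K // 2), K // 2 + 1)       # {-floor(K/2), ..., floor(K/2)}
shift_set = list(product(offsets, offsets))  # the K*K relative shifts of Delta_K

# One possible bijection: head h attends at relative shift shift_of_head[h].
shift_of_head = {h: shift_set[h] for h in range(N_h)}
print(shift_of_head)
# {0: (-1, -1), 1: (-1, 0), ..., 8: (1, 1)}
```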
The exact condition can be found in the statement of Lemma UNKREF15. Then, Lemma UNKREF17 shows that the aforementioned condition is satisfied for the relative positional encoding that we refer to as the quadratic encoding: (h) := -(h) (1, -21(h), -22(h)) := (2 , 1, 2) qry = key := 0 key := The learned parameters ${{\Delta }}^{(h)} = ({{\Delta }}^{(h)}_1, {{\Delta }}^{(h)}_2)$ and $\alpha ^{(h)}$ determine the center and width of attention of each head, respectively. On the other hand, ${{\delta }}= ({{\delta }}_1, {{\delta }}_2)$ is fixed and expresses the relative shift between query and key pixels. It is important to stress that the above encoding is not the only one for which the conditions of Lemma UNKREF15 are satisfied. In fact, in our experiments, the relative encoding learned by the neural network also matched the conditions of the lemma (despite being different from the quadratic encoding). Nevertheless, the encoding defined above is very efficient in terms of size, as only $D_p = 3$ dimensions suffice to encode the relative position of pixels, while also reaching similar or better empirical performance (than the learned one). Though we lack a formal proof, we conjecture that every encoding that satisfies Lemma UNKREF15 should have at least three dimensions. Self-Attention as a Convolutional Layer ::: Remark for the 1D case. Convolutional layers acting on sequences are commonly used in the literature for text BIBREF11, as well as audio BIBREF12 and time series BIBREF13. Theorem UNKREF11 can be straightforwardly extended to show that multi-head self-attention with $N_h$ heads can also simulate a 1D convolutional layer with a kernel of size $K=N_h$ with $\min (D_h, D_{\textit {out}})$ output channels using a positional encoding of dimension $D_p \ge 2$. Since we have not tested empirically if the preceding construction matches the behavior of 1D self-attention in practice, we cannot claim that it actually learns to convolve an input sequence—only that it has the capacity to do so. Self-Attention as a Convolutional Layer ::: Proof of Main Theorem The proof follows directly from Lemmas UNKREF15 and UNKREF17 stated below: Lemma 1 Consider a multi-head self-attention layer consisting of $N_h = K^2$ heads, $D_h \ge D_{\textit {out}}$ and let $~:~[N_h]~\rightarrow ~{\Delta \!\!\!\!\Delta }_K$ be a bijective mapping of heads onto shifts. Further, suppose that for every head the following holds: ((h),:) = {ll 1 if (h) = - 0 otherwise. . Then, for any convolutional layer with a $K \times K$ kernel and $D_{\textit {out}}$ output channels, there exists $\lbrace _{\!\textit {val}}^{(h)}\rbrace _{h \in [N_h]}$ such that $ \operatorname{MHSA}() = \operatorname{Conv}() $ for every $\in ^{W \times H \times D_{\textit {in}}}$. Our first step will be to rework the expression of the Multi-Head Self-Attention operator from (SECREF6) and (SECREF6) such that the effect of the multiple heads becomes more transparent: MHSA() = out+ h [Nh] ((h)) val(h) out[(h-1)Dh + 1:h Dh +1] (h) Note that each head's value matrix $_{\!\textit {val}}^{(h)} \in ^{D_{\textit {in}} \times D_{h}}$ and each block of the projection matrix $_{\textit {out}}$ of dimension $D_h \times D_{\textit {out}}$ are learned. Assuming that $D_h \ge D_{\textit {out}}$, we can replace each pair of matrices by a learned matrix $^{(h)}$ for each head. 
We consider one output pixel of the multi-head self-attention: MHSA(),: = h [Nh] ( ((h),:) ,: ) (h) + out Due to the conditions of the Lemma, for the $h$-th attention head the attention probability is one when $ = - (h) $ and zero otherwise. The layer's output at pixel $$ is thus equal to MHSA() = h [Nh] - (h),: (h) + out For $K = \sqrt{N_h}$, the above can be seen to be equivalent to a convolutional layer expressed in eq. SECREF8: there is a one to one mapping (implied by map $$) between the matrices $^{(h)}$ for $h = [N_h]$ and the matrices $_{k_1,k_2,:,:}$ for all $(k_1,k_2) \in [K]^2.$ Self-Attention as a Convolutional Layer ::: Proof of Main Theorem ::: Remark about @!START@$D_h$@!END@ and @!START@$D_{\textit {out}}$@!END@. It is frequent in transformer-based architectures to set $D_h~=~D_{\textit {out}}/N_h$, hence $D_h < D_{\textit {out}}$. In that case, $^{(h)}$ can be seen to be of rank $D_{\textit {out}} - D_h$, which does not suffice to express every convolutional layer with $D_{\textit {out}}$ channels. Nevertheless, it can be seen that any $D_h$ out of $D_{\textit {out}}$ outputs of $\operatorname{MHSA}()$ can express the output of any convolutional layer with $D_h$ output channels. To cover both cases, in the statement of the main theorem we assert that the output channels of the convolutional layer should be $\min (D_h, D_{\textit {out}})$. In practice, we advise to concatenate heads of dimension $D_h = D_{\textit {out}}$ instead of splitting the $D_{\textit {out}}$ dimensions among heads to have exact re-parametrization and no “unused” channels. Lemma 2 There exists a relative encoding scheme $\lbrace _{{\delta }}\in ^{D_p}\rbrace _{{{\delta }}\in \mathbb {Z}^2}$ with $D_p \ge 3$ and parameters $_{\!\textit {qry}}, _{\!\textit {key}}, \widehat{}_{\!\textit {key}},$ with $D_p \le D_k$ such that, for every ${{\Delta }}\in \Delta \!\!\!\!\Delta _K$ there exists some vector $$ (conditioned on ${{\Delta }}$) yielding $ (_{,:})_{} = 1 $ if $ - = {{\Delta }}$ and zero, otherwise. We show by construction the existence of a $D_p=3$ dimensional relative encoding scheme yielding the required attention probabilities. As the attention probabilities are independent of the input tensor $$, we set $_{\!\textit {key}}=_{\!\textit {qry}}={0}$ which leaves only the last term of eq:attrel. Setting $\widehat{}_{\!\textit {key}}\in ^{D_k \times D_p}$ to the identity matrix (with appropriate row padding), yields $_{, } = ^{\top } _{{{\delta }}}$ where $\quad {{\delta }}:= - $. Above, we have assumed that $D_p \le D_k$ such that no information from $_{{{\delta }}}$ is lost. Now, suppose that we could write: , = -(- 2 + c) for some constant $c$. In the above expression, the maximum attention score over $_{, :}$ is $-\alpha c$ and it is reached for $_{, }$ with ${{\delta }}= {{\Delta }}$. On the other hand, the $\alpha $ coefficient can be used to scale arbitrarily the difference between $_{,{{\Delta }}}$ and the other attention scores. In this way, for ${{\delta }}= {{\Delta }}$, we have (,:) = e-(- 2+c) ' e-((- ')- 2+c) = e-- 2 e-c ' e-(- ')- 2 e-c = e-- 2 ' e-(- ')- 2 = 1 1 + ' e-(- ')- 2 = 1 and for ${{\delta }}\ne {{\Delta }}$, the equation becomes $ \lim _{\alpha \rightarrow \infty } (_{,:})_{} = 0, $ exactly as needed to satisfy the lemma statement. What remains is to prove that there exist $$ and $\lbrace _{{\delta }}\rbrace _{{{\delta }}\in \mathcal {Z}^2}$ for which eq:isoattdecomposed holds. 
Expanding the rhs of the equation, we have $ -\alpha (\Vert {{\delta }}- {{\Delta }}\Vert ^2 + c) = -\alpha ( \Vert {{\delta }}\Vert ^2 + \Vert {{\Delta }}\Vert ^2 - 2\langle {{\delta }}, {{\Delta }}\rangle + c )\,. $ Now if we set $ = -\alpha \, (1, -2{{\Delta }}_1, -2{{\Delta }}_2) $ and $ _{{\delta }}= (\Vert {{\delta }}\Vert ^2 , {{\delta }}_1, {{\delta }}_2), $ then which matches eq:isoattdecomposed with $c = -\Vert {{\Delta }}\Vert ^2$ and the proof is concluded. Experiments The aim of this section is to validate the applicability of our theoretical results—which state that self-attention can perform convolution—and to examine whether self-attention layers in practice do actually learn to operate like convolutional layers, when being trained on standard image classification tasks. In particular, we study the relationship between self-attention and convolution with quadratic and learned relative positional encodings. We find that for both cases, the attention probabilities learned tend to respect the conditions of Lemma UNKREF15, corroborating our hypothesis. Experiments ::: Implementation Details We study a fully attentional model consisting of six multi-head self-attention layers. As it has already been shown by BIBREF9 that combining attention features with convolutional features improves performance on Cifar-100 and ImageNet, we do not focus on attaining state-of-the-art performance. Nevertheless, to validate that our model learns a meaningful classifier we compare it to the standard ResNet18 BIBREF14 on the CIFAR-10 dataset BIBREF15. In all experiments, we use a $2\times 2$ invertible down-sampling BIBREF16 on the input to reduce the size of the image as storing the attention coefficient tensor requires a large amount of GPU memory. The fixed size representation of the input image is computed as the average pooling of the last layer representations and given to a linear classifier. We used the PyTorch library BIBREF17 and based our implementation on PyTorch Transformers. We release our code on Github and all hyper-parameters are in tab:hyper-parameter in the Appendix. Experiments ::: Quadratic Encoding As a first step, we aim to verify that, with the relative position encoding introduced in (SECREF3), attention layers learn to behave like convolutional layers. We train nine attention heads at each layer to be on par with the $3\times 3$ kernels used predominantly by the ResNet architecture. The center of attention of each head $h$ is initialized to $\Delta ^{(h)} \sim \mathcal {N}({0}, 2_2)$. fig:isoduringtraining shows how the initial positions of the heads (different colors) at layer 4 changed during training. We can see that after optimization, the heads attend on specific pixel of the image forming a grid around the query pixel. Our intuition that Self-Attention applied to images learn convolutional filter around the queried pixel is then confirmed. fig:isoattentionfinal displays all attention head at each layer of the model at the end of the training. It can be seen that in the first few layers the heads tend to focus on local patterns (layers 1 and 2), while deeper layers (layers 3-6) also attend to larger patterns by positioning the center of attention further from the queried pixel position. We also include in the Appendix a plot of the attention positions for a higher number of heads ($N_h=16$), fig:isomanyheads displays both local patterns similar to CNN and long range dependencies. 
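The grid-like attention patterns described above can be reproduced directly from the quadratic encoding: each head's attention probability over key pixels is a softmax of -alpha * ||delta - Delta||^2, which sharpens onto the single pixel at relative shift Delta as alpha grows. The following standalone NumPy illustration (not the training code; the query position, center and alpha are arbitrary) shows the effect for one head.

```python
import numpy as np

def quadratic_attention(query, center, alpha, width, height):
    # Attention probabilities of one head over all key pixels of a width x height
    # image, for a given query pixel, center of attention Delta and sharpness alpha.
    qi, qj = query
    scores = np.empty((width, height))
    for ki in range(width):
        for kj in range(height):
            delta = np.array([ki - qi, kj - qj])     # relative shift: key - query
            scores[ki, kj] = -alpha * np.sum((delta - center) ** 2)
    e = np.exp(scores - scores.max())
    return e / e.sum()

probs = quadratic_attention(query=(4, 4), center=np.array([1, -1]), alpha=5.0,
                            width=9, height=9)
print(np.unravel_index(probs.argmax(), probs.shape))  # (5, 3): the pixel at shift (1, -1)
print(round(float(probs.max()), 3))                   # close to 1 for this alpha
```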
Interestingly, attention heads do not overlap and seem to take an arrangement maximizing the coverage of the input space. To verify that our self-attention model performs equally well as a small ResNet (tab:parametersize), in fig:learnedattentionmap we display the evolution of the test accuracy on CIFAR-10 over the 300 epochs of training. The ResNet is faster to converge, but we cannot ascertain whether this corresponds to an inherent property of the architecture or an artifact of the adopted optimization procedures. Our implementation could be optimized to exploit the locality of Gaussian attention probabilities and reduce significantly the number of FLOPS. Experiments ::: Learned Relative Positional Encoding We move on to study the positional encoding used in practice by fully-attentional models on images. We implemented the 2D relative positional encoding scheme used by BIBREF0, BIBREF9: we learn a $\lfloor D_p / 2 \rfloor $ position encoding vector for each row and each column pixel shift. Hence the relative positional encoding of a key pixel at position $$ with a query pixel at position $$ is the concatenation of the row shift embedding ${{\delta }}_1$ and the column shift embedding ${{\delta }}_2$ (where ${{\delta }}= - $). We chose $D_p = D_{\textit {out}} = 400$ in the experiment. We differ from the (unpublished) implementation described by BIBREF0 in the following points: (i) we do not use convolution stem and ResNet bottlenecks for downsampling, but only a $2\times 2$ invertible downsampling layer BIBREF16 at input, (ii) we use $D_h = D_{\textit {out}}$ instead of $D_h = D_{\textit {out}} / N_h$ backed from our theory that the effective number of learned filters is $\min (D_h, D_{\textit {out}})$, (iii) the attention scores are computed using only the relative positions of the pixels and not the data. As seen in tab:parametersize, our implementation achieves accuracy close to that of ResNet18. The attention probabilities of each head at each layer are displayed on fig:learnedattentionmap. The figure confirms our hypothesis for the first two layers and partially for the third: even when left to learn the encoding from the data, certain self-attention heads (depicted on the left) learn to attend to individual pixels, closely matching the condition of Lemma UNKREF15 and thus Theorem UNKREF11. At the same time, other heads pay attention to horizontally-symmetric but non-localized patterns, as well as to long-range pixel inter-dependencies. The phenomenon is particularly prominent for layers four to six, where the behavior of self-attention can be seen to deviate from that of convolution. We also notice that vertical symmetry is much more rare in the learned attention probabilities of high layers. This matches the intuition that, for image classification, distinguishing between what is above or below something is more crucial than what is left or right. Finally, the fact that some of the heads in the last two layers seem to be redundant, likely indicating that the computational and space complexity of the model could be amenable to further reduction, for example by pruning. Conclusion We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that learned fully-attentional models do behave similar to CNN in practice. More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters—similar to deformable convolutions BIBREF18, BIBREF19. 
Interesting directions for future work include translating existing insights from the rich CNNs literature back to transformers on various data modalities, including images, text and time series. Also, though we currently lack the computational resources to do so, we would be interested to test whether our findings are replicated for datasets of larger-scale, such as ImageNet and COCO. Appendix ::: Generalized Quadratic Positional Encoding We noticed the similarity of the attention probabilities in the quadratic positional encoding (sec:attentioncanimplementcnn) to isotropic bivariate Gaussian distributions with bounded support: (, :) = e-(- ) - 2' [W] [H] e-(' - ) - 2 . Building on this observation, we further extended our attention mechanism to non-isotropic Gaussian distribution over pixel positions. Each head is parametrized by a center of attention ${{\Delta }}$ and a covariance matrix $$ to obtain the following attention scores, , = -1 2 (- )-1 (- ) = -1 2 -1 + -1 - 1 2 -1 , where, once more, ${{\delta }}= - $. The last term can be discarded because the softmax is shift invariant and we rewrite the attention coefficient as a dot product between the head target vector $$ and the relative position encoding $_{{{\delta }}}$ (consisting of the first and second order combinations of the shift in pixels ${{\delta }}$): Appendix ::: Generalized Quadratic Positional Encoding ::: Evaluation. We trained our model using this generalized quadratic relative position encoding. We were curious to see if, using the above encoding the self-attention model would learn to attend to non-isotropic groups of pixels—thus forming unseen patterns in CNNs. Each head was parametrized by ${{\Delta }}\in ^2$ and $^{-1/2} \in ^{2\times 2}$ to ensure that the covariance matrix remained positive semi-definite. We initialized the center of attention to ${{\Delta }}^{(h)} \sim \mathcal {N}(\mathbf {0}, 2 _2)$ and $^{-1/2} = _2 + \mathcal {N}(\mathbf {0}, 0.01 _2)$ so that initial attention probabilities were close to an isotropic Gaussian. fig:nonisoattentionfinal shows that the network did learn non-isotropic attention probability patterns, especially in high layers. Nevertheless, the fact that we do not obtain any performance improvement seems to suggest that attention non-isotropy is not particularly helpful in practice—the quadratic positional encoding suffices. Appendix ::: Generalized Quadratic Positional Encoding ::: Pruning degenerated heads. We noticed that certain non-isotropic attention heads attended on “non-intuitive” patches of pixels: either attending a very thin stripe of pixels, when $^{-1}$ was almost singular, or attending all pixels uniformly, when $^{-1}$ was close to $\mathbf {0}$ (i.e. constant attention scores). We asked ourselves, are such attention patterns indeed useful for the model or are these heads degenerated and unused? To find out, we pruned all heads having largest eigen-values smaller than $10^{-5}$ or condition number (ratio of the biggest and smallest eigen-values) greater than $10^5$. Specifically in our model with 6-layer and 9-heads each, we pruned $[2, 4, 1, 2, 6, 0]$ heads from the first to the last layer. This means that these layers cannot express a $3 \times 3$ kernel anymore. As shown in yellow on fig:learningcurve, this ablation initially hurts a bit the performance, probably due to off biases, but after a few epochs of continued training with a smaller learning rate (divided by 10) the accuracy recovers its unpruned value. 
Hence, without sacrificing performance, we reduce the size of the parameters and the number of FLOPS by a fourth. Appendix ::: Increasing the Number of Heads For completeness, we also tested increasing the number of heads of our architecture from 9 to 16. Similar to Figure FIGREF23, we see that the network distinguishes two main types of attention patterns. Localized heads (i.e., those that attend to nearly individual pixels) appear more frequently in the first few layers. The self-attention layer uses these heads to act in a manner similar to how convolutional layers do. Heads with less-localized attention become more common at higher layers.
No
5367f8979488aaa420d8a69fec656851095ecacb
5367f8979488aaa420d8a69fec656851095ecacb_0
Q: What numerical experiments they perform? Text: Introduction Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer BIBREF1. Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 BIBREF2, BERT BIBREF3 and Transformer-XL BIBREF4, seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks. The key difference between transformers and previous methods, such as recurrent neural networks BIBREF5 and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence. This is made possible thanks to the attention mechanism—originally introduced in Neural Machine Translation to better handle long-range dependencies BIBREF6. With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations. The representation of each word is then updated based on those words whose attention score is highest. Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks. Self-attention was first added to CNN by either using channel-based attention BIBREF7 or non-local relationships across the image BIBREF8. More recently, BIBREF9 augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks. Interestingly, BIBREF0 noticed that, even though state-of-the art results are reached when attention and convolutional features are combined, under same computation and model size constraints, self-attention-only architectures also reach competitive image classification accuracy. These findings raise the question, do self-attention layers process images in a similar manner to convolutional layers? From a theoretical perspective, one could argue that transfomers have the capacity to simulate any function—including a CNN. Indeed, BIBREF10 showed that a multi-layer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic. Unfortunately, universality results do not reveal how a machine solves a task, only that it has the capacity to do so. Thus, the question of how self-attention layers actually process images remains open. Introduction ::: Contributions. In this work, we put forth theoretical and empirical evidence that self-attention layers can (and do) learn to behave similar to convolutional layers: From a theoretical perspective, we provide a constructive proof showing that self-attention layers can express any convolutional layers. Specifically, we show that a single multi-head self-attention layer using relative positional encoding can be re-parametrized to express any convolutional layer. Our insights lead to a relative positional encoding, that we refer to as quadratic encoding, that is very efficient in terms of size. Our experiments show that the first few layers of attention-only architectures BIBREF0 do learn to attend on grid-like pattern around each query pixel, similar to our theoretical construction. Strikingly, this behavior is confirmed both for our quadratic encoding, but also for relative encoding that is learned during training. 
Our results seem to suggest that localized convolution is the right inductive bias for the first few layers of an image classifying network. For deeper layers, on the other hand, long-range as well as horizontally-symmetric inter-dependencies become more relevant. For reproducibility purposes, our code is publicly available on GitHub. Background on Attention Mechanisms for Vision We here recall the mathematical formulation of self-attention layers and emphasize the role of positional encodings. Background on Attention Mechanisms for Vision ::: The Multi-Head Self-Attention Layer Let $\in ^{T\times D_{\textit {in}}}$ be an input matrix consisting of $T$ tokens in of ${D_{\textit {in}}}$ dimensions each. While in NLP each token corresponds to a word in a sentence, the same formalism can be applied to any sequence of $T$ discrete objects, e.g. pixels. A self-attention layer maps any query token $t \in [T]$ from $D_{\textit {in}}$ to $D_{\textit {out}}$ dimensions as follows: Self-Attention()t,: := ( t,: ) val, where we refer to the elements of the $T \times T$ matrix := qrykey as attention scores and the softmax output as attention probabilities. The layer is parametrized by a query matrix $_{\!\textit {qry}}\in ^{D_{\textit {in}} \times D_{k}}$, a key matrix $_{\!\textit {key}}\in ^{D_{\textit {in}} \times D_{k}}$ and a value matrix $_{\!\textit {val}}\in ^{D_{\textit {in}} \times D_{\textit {out}}}$.For simplicity, we exclude any residual connections, batch normalization and constant factors. A key property of the self-attention model described above is that it is equivariant to reordering, that is, it gives the same output independently of how the $T$ input tokens are shuffled. This is problematic for cases we expect the order of things to matter. To alleviate the limitation, a positional encoding is learned for each token in the sequence (or pixel in an image), and added to the representation of the token itself before applying self-attention := (+ ) qrykey(+ ), where $\in ^{T \times D_{\textit {in}}}$ contains the embedding vectors for each position. More generally, $$ may be substituted by any function that returns a vector representation of the position. It has been found beneficial in practice to replicate this self-attention mechanism into multiple heads, each being able to focus on different parts of the input by using different query, key and value matrices. In multi-head self-attention, the output of the $N_h$ heads of output dimension $D_h$ are concatenated and projected to dimension $D_{\textit {out}}$ as follows: MHSA() := *concath [Nh][Self-Attentionh()] out + out and two new parameters are introduced: the projection matrix $_{\!\textit {out}} \in ^{N_h D_h \times D_{\textit {out}}}$ and a bias term $_{\textit {out}}\in ^{D_{\textit {out}}}$. Background on Attention Mechanisms for Vision ::: Attention for Images Convolutional layers are the de facto choice for building neural networks that operate on images. We recall that, given an image tensor $~\in ~^{W\times H \times D_{\textit {in}}}$ of width $W$, height $H$ and $D_{\textit {in}}$ channels, the output of a convolutional layer for pixel $(i,j)$ is given by Conv()i,j,: := (1, 2) K 1,2,:,: i+1, j+2, : + , where $$ is the $K \times K \times D_{\textit {out}} \times D_{\textit {in}}$ weight tensor , $\in ^{D_{\textit {out}}}$ is the bias vector and the set contains all possible shifts appearing when convolving the image with a $K\times K$ kernel. 
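For reference, the convolutional layer just recalled can be written directly as a sum over kernel shifts; the NumPy sketch below implements that formula, with zero padding at the image borders as an arbitrary choice for the out-of-range shifts that the formula leaves implicit. The shapes and random inputs are toy placeholders.

```python
import numpy as np
from itertools import product

def conv_layer(X, W, b):
    # X: (Wd, Ht, D_in),  W: (K, K, D_out, D_in),  b: (D_out,)
    Wd, Ht, _ = X.shape
    K = W.shape[0]
    half = K // 2
    out = np.tile(b, (Wd, Ht, 1)).astype(float)
    for i, j in product(range(Wd), range(Ht)):
        for d1, d2 in product(range(-half, half + 1), repeat=2):
            ki, kj = i + d1, j + d2
            if 0 <= ki < Wd and 0 <= kj < Ht:        # zero padding outside the image
                out[i, j] += W[d1 + half, d2 + half] @ X[ki, kj]
    return out                                       # (Wd, Ht, D_out)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5, 3))                   # toy 5x5 image with 3 channels
W = rng.standard_normal((3, 3, 4, 3))                # 3x3 kernel, 4 output channels
b = rng.standard_normal(4)
print(conv_layer(X, W, b).shape)                     # (5, 5, 4)
```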
In the following, we review how self-attention can be adapted from 1D sequences to images. With images, rather than tokens, we have query and key pixels $, \in [W] \times [H]$. Accordingly, the input is a tensor $$ of dimension $W \times H \times D_{\textit {in}}$ and each attention score associates a query and a key pixel. To keep the formulas consistent with the 1D case, we abuse notation and slice tensors by using a 2D index vector: if $= (i,j)$, we write $_{,:}$ and $_{,:}$ to mean $_{i, j,:}$ and $_{i, j,:,:}$, respectively. With this notation in place, the multi-head self attention layer output at pixel $$ can be expressed as follows: Self-Attention(),: = ( ,: ) ,: val and accordingly for the multi-head case. Background on Attention Mechanisms for Vision ::: Positional Encoding for Images There are two types of positional encoding that has been used in transformer-based architectures: the absolute and relative encoding (see also tab:relworkattention in the Appendix). With absolute encodings, a (fixed or learned) vector $_{,:}$ is assigned to each pixel $$. The computation of the attention scores we saw in eq:attcoeff can then be decomposed as follows: , abs = (,: + ,:) qrykey(,: + ,:) = ,: qrykey,: + ,: qrykey,: + ,:qrykey,: + ,: qrykey,: where $$ and $$ correspond to the query and key pixels, respectively. The relative positional encoding was introduced by BIBREF4. The main idea is to only consider the position difference between the query pixel (pixel we compute the representation of) and the key pixel (pixel we attend) instead of the absolute position of the key pixel: , rel := ,: qry key ,: + ,: qry key + key ,: + key In this manner, the attention scores only depend on the shift ${{\delta }}:= - $. Above, the learnable vectors $$ and $$ are unique for each head, whereas for every shift ${{\delta }}$ the relative positional encoding $_{{{\delta }}} \in ^{D_p}$ is shared by all layers and heads. Moreover, now the key weights are split into two types: $_{\!\textit {key}}$ pertain to the input and $\widehat{}_{\!\textit {key}}$ to the relative position of pixels. Self-Attention as a Convolutional Layer This section derives sufficient conditions such that a multi-head self-attention layer can simulate a convolutional layer. Our main result is the following: Theorem 1 A multi-head self-attention layer with $N_h$ heads of dimension $D_h$, output dimension $D_{\textit {out}}$ and a relative positional encoding of dimension $D_p \ge 3$ can express any convolutional layer of kernel size $\sqrt{N_h} \times \sqrt{N_h}$ and $\min (D_h, D_{\textit {out}})$ output channels. The theorem is proven constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer. In the proposed construction, the attention scores of each self-attention head should attend to a different relative shift within the set $\Delta \!\!\!\!\Delta _K = \lbrace -\lfloor K/2 \rfloor , \dots , \lfloor K/2 \rfloor \rbrace ^2$ of all pixel shifts in a $K\times K$ kernel. The exact condition can be found in the statement of Lemma UNKREF15. Then, Lemma UNKREF17 shows that the aforementioned condition is satisfied for the relative positional encoding that we refer to as the quadratic encoding: (h) := -(h) (1, -21(h), -22(h)) := (2 , 1, 2) qry = key := 0 key := The learned parameters ${{\Delta }}^{(h)} = ({{\Delta }}^{(h)}_1, {{\Delta }}^{(h)}_2)$ and $\alpha ^{(h)}$ determine the center and width of attention of each head, respectively. 
On the other hand, ${{\delta }}= ({{\delta }}_1, {{\delta }}_2)$ is fixed and expresses the relative shift between query and key pixels. It is important to stress that the above encoding is not the only one for which the conditions of Lemma UNKREF15 are satisfied. In fact, in our experiments, the relative encoding learned by the neural network also matched the conditions of the lemma (despite being different from the quadratic encoding). Nevertheless, the encoding defined above is very efficient in terms of size, as only $D_p = 3$ dimensions suffice to encode the relative position of pixels, while also reaching similar or better empirical performance (than the learned one). Though we lack a formal proof, we conjecture that every encoding that satisfies Lemma UNKREF15 should have at least three dimensions. Self-Attention as a Convolutional Layer ::: Remark for the 1D case. Convolutional layers acting on sequences are commonly used in the literature for text BIBREF11, as well as audio BIBREF12 and time series BIBREF13. Theorem UNKREF11 can be straightforwardly extended to show that multi-head self-attention with $N_h$ heads can also simulate a 1D convolutional layer with a kernel of size $K=N_h$ with $\min (D_h, D_{\textit {out}})$ output channels using a positional encoding of dimension $D_p \ge 2$. Since we have not tested empirically if the preceding construction matches the behavior of 1D self-attention in practice, we cannot claim that it actually learns to convolve an input sequence—only that it has the capacity to do so. Self-Attention as a Convolutional Layer ::: Proof of Main Theorem The proof follows directly from Lemmas UNKREF15 and UNKREF17 stated below: Lemma 1 Consider a multi-head self-attention layer consisting of $N_h = K^2$ heads, $D_h \ge D_{\textit {out}}$ and let $~:~[N_h]~\rightarrow ~{\Delta \!\!\!\!\Delta }_K$ be a bijective mapping of heads onto shifts. Further, suppose that for every head the following holds: ((h),:) = {ll 1 if (h) = - 0 otherwise. . Then, for any convolutional layer with a $K \times K$ kernel and $D_{\textit {out}}$ output channels, there exists $\lbrace _{\!\textit {val}}^{(h)}\rbrace _{h \in [N_h]}$ such that $ \operatorname{MHSA}() = \operatorname{Conv}() $ for every $\in ^{W \times H \times D_{\textit {in}}}$. Our first step will be to rework the expression of the Multi-Head Self-Attention operator from (SECREF6) and (SECREF6) such that the effect of the multiple heads becomes more transparent: MHSA() = out+ h [Nh] ((h)) val(h) out[(h-1)Dh + 1:h Dh +1] (h) Note that each head's value matrix $_{\!\textit {val}}^{(h)} \in ^{D_{\textit {in}} \times D_{h}}$ and each block of the projection matrix $_{\textit {out}}$ of dimension $D_h \times D_{\textit {out}}$ are learned. Assuming that $D_h \ge D_{\textit {out}}$, we can replace each pair of matrices by a learned matrix $^{(h)}$ for each head. We consider one output pixel of the multi-head self-attention: MHSA(),: = h [Nh] ( ((h),:) ,: ) (h) + out Due to the conditions of the Lemma, for the $h$-th attention head the attention probability is one when $ = - (h) $ and zero otherwise. The layer's output at pixel $$ is thus equal to MHSA() = h [Nh] - (h),: (h) + out For $K = \sqrt{N_h}$, the above can be seen to be equivalent to a convolutional layer expressed in eq. 
SECREF8: there is a one to one mapping (implied by map $$) between the matrices $^{(h)}$ for $h = [N_h]$ and the matrices $_{k_1,k_2,:,:}$ for all $(k_1,k_2) \in [K]^2.$ Self-Attention as a Convolutional Layer ::: Proof of Main Theorem ::: Remark about @!START@$D_h$@!END@ and @!START@$D_{\textit {out}}$@!END@. It is frequent in transformer-based architectures to set $D_h~=~D_{\textit {out}}/N_h$, hence $D_h < D_{\textit {out}}$. In that case, $^{(h)}$ can be seen to be of rank $D_{\textit {out}} - D_h$, which does not suffice to express every convolutional layer with $D_{\textit {out}}$ channels. Nevertheless, it can be seen that any $D_h$ out of $D_{\textit {out}}$ outputs of $\operatorname{MHSA}()$ can express the output of any convolutional layer with $D_h$ output channels. To cover both cases, in the statement of the main theorem we assert that the output channels of the convolutional layer should be $\min (D_h, D_{\textit {out}})$. In practice, we advise to concatenate heads of dimension $D_h = D_{\textit {out}}$ instead of splitting the $D_{\textit {out}}$ dimensions among heads to have exact re-parametrization and no “unused” channels. Lemma 2 There exists a relative encoding scheme $\lbrace _{{\delta }}\in ^{D_p}\rbrace _{{{\delta }}\in \mathbb {Z}^2}$ with $D_p \ge 3$ and parameters $_{\!\textit {qry}}, _{\!\textit {key}}, \widehat{}_{\!\textit {key}},$ with $D_p \le D_k$ such that, for every ${{\Delta }}\in \Delta \!\!\!\!\Delta _K$ there exists some vector $$ (conditioned on ${{\Delta }}$) yielding $ (_{,:})_{} = 1 $ if $ - = {{\Delta }}$ and zero, otherwise. We show by construction the existence of a $D_p=3$ dimensional relative encoding scheme yielding the required attention probabilities. As the attention probabilities are independent of the input tensor $$, we set $_{\!\textit {key}}=_{\!\textit {qry}}={0}$ which leaves only the last term of eq:attrel. Setting $\widehat{}_{\!\textit {key}}\in ^{D_k \times D_p}$ to the identity matrix (with appropriate row padding), yields $_{, } = ^{\top } _{{{\delta }}}$ where $\quad {{\delta }}:= - $. Above, we have assumed that $D_p \le D_k$ such that no information from $_{{{\delta }}}$ is lost. Now, suppose that we could write: , = -(- 2 + c) for some constant $c$. In the above expression, the maximum attention score over $_{, :}$ is $-\alpha c$ and it is reached for $_{, }$ with ${{\delta }}= {{\Delta }}$. On the other hand, the $\alpha $ coefficient can be used to scale arbitrarily the difference between $_{,{{\Delta }}}$ and the other attention scores. In this way, for ${{\delta }}= {{\Delta }}$, we have (,:) = e-(- 2+c) ' e-((- ')- 2+c) = e-- 2 e-c ' e-(- ')- 2 e-c = e-- 2 ' e-(- ')- 2 = 1 1 + ' e-(- ')- 2 = 1 and for ${{\delta }}\ne {{\Delta }}$, the equation becomes $ \lim _{\alpha \rightarrow \infty } (_{,:})_{} = 0, $ exactly as needed to satisfy the lemma statement. What remains is to prove that there exist $$ and $\lbrace _{{\delta }}\rbrace _{{{\delta }}\in \mathcal {Z}^2}$ for which eq:isoattdecomposed holds. Expanding the rhs of the equation, we have $ -\alpha (\Vert {{\delta }}- {{\Delta }}\Vert ^2 + c) = -\alpha ( \Vert {{\delta }}\Vert ^2 + \Vert {{\Delta }}\Vert ^2 - 2\langle {{\delta }}, {{\Delta }}\rangle + c )\,. $ Now if we set $ = -\alpha \, (1, -2{{\Delta }}_1, -2{{\Delta }}_2) $ and $ _{{\delta }}= (\Vert {{\delta }}\Vert ^2 , {{\delta }}_1, {{\delta }}_2), $ then which matches eq:isoattdecomposed with $c = -\Vert {{\Delta }}\Vert ^2$ and the proof is concluded. 
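As a small numerical illustration of the construction behind the theorem (our own sketch, not released code), the snippet below hard-codes each of the K^2 heads' attention probabilities to be one-hot on the key pixel at that head's assigned shift, as in the condition of Lemma 1, sets the per-head matrices W^(h) to the matching kernel slices, and checks that the resulting multi-head output coincides with the corresponding convolution; zero padding is used at the borders in both computations.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
K, D_in, D_out, Wd, Ht = 3, 2, 4, 6, 6
X = rng.standard_normal((Wd, Ht, D_in))
Wconv = rng.standard_normal((K, K, D_out, D_in))     # convolution kernel to express
b_out = rng.standard_normal(D_out)

shifts = list(product(range(-(K // 2), K // 2 + 1), repeat=2))   # head h <-> shifts[h]
T = Wd * Ht
X_flat = X.reshape(T, D_in)                          # pixels as a sequence of T tokens
pix = lambda i, j: i * Ht + j                        # flatten a 2D pixel index

# Attention probabilities of each head, built explicitly as T x T matrices that are
# one-hot on the key pixel at the head's shift (the condition of Lemma 1).
attn = []
for d1, d2 in shifts:
    P = np.zeros((T, T))
    for i, j in product(range(Wd), range(Ht)):
        ki, kj = i + d1, j + d2
        if 0 <= ki < Wd and 0 <= kj < Ht:
            P[pix(i, j), pix(ki, kj)] = 1.0
    attn.append(P)

# Per-head matrices W^(h) chosen as the matching kernel slices.
W_head = [Wconv[d1 + K // 2, d2 + K // 2].T for (d1, d2) in shifts]   # (D_in, D_out)

mhsa_out = b_out + sum(P @ X_flat @ Wh for P, Wh in zip(attn, W_head))
mhsa_out = mhsa_out.reshape(Wd, Ht, D_out)

# Reference convolution with zero padding, computed directly from its definition.
conv_out = np.tile(b_out, (Wd, Ht, 1))
for i, j in product(range(Wd), range(Ht)):
    for d1, d2 in shifts:
        ki, kj = i + d1, j + d2
        if 0 <= ki < Wd and 0 <= kj < Ht:
            conv_out[i, j] += Wconv[d1 + K // 2, d2 + K // 2] @ X[ki, kj]

print(np.allclose(mhsa_out, conv_out))               # True
```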
Experiments The aim of this section is to validate the applicability of our theoretical results—which state that self-attention can perform convolution—and to examine whether self-attention layers in practice do actually learn to operate like convolutional layers, when being trained on standard image classification tasks. In particular, we study the relationship between self-attention and convolution with quadratic and learned relative positional encodings. We find that for both cases, the attention probabilities learned tend to respect the conditions of Lemma UNKREF15, corroborating our hypothesis. Experiments ::: Implementation Details We study a fully attentional model consisting of six multi-head self-attention layers. As it has already been shown by BIBREF9 that combining attention features with convolutional features improves performance on Cifar-100 and ImageNet, we do not focus on attaining state-of-the-art performance. Nevertheless, to validate that our model learns a meaningful classifier we compare it to the standard ResNet18 BIBREF14 on the CIFAR-10 dataset BIBREF15. In all experiments, we use a $2\times 2$ invertible down-sampling BIBREF16 on the input to reduce the size of the image as storing the attention coefficient tensor requires a large amount of GPU memory. The fixed size representation of the input image is computed as the average pooling of the last layer representations and given to a linear classifier. We used the PyTorch library BIBREF17 and based our implementation on PyTorch Transformers. We release our code on Github and all hyper-parameters are in tab:hyper-parameter in the Appendix. Experiments ::: Quadratic Encoding As a first step, we aim to verify that, with the relative position encoding introduced in (SECREF3), attention layers learn to behave like convolutional layers. We train nine attention heads at each layer to be on par with the $3\times 3$ kernels used predominantly by the ResNet architecture. The center of attention of each head $h$ is initialized to $\Delta ^{(h)} \sim \mathcal {N}({0}, 2_2)$. fig:isoduringtraining shows how the initial positions of the heads (different colors) at layer 4 changed during training. We can see that after optimization, the heads attend on specific pixel of the image forming a grid around the query pixel. Our intuition that Self-Attention applied to images learn convolutional filter around the queried pixel is then confirmed. fig:isoattentionfinal displays all attention head at each layer of the model at the end of the training. It can be seen that in the first few layers the heads tend to focus on local patterns (layers 1 and 2), while deeper layers (layers 3-6) also attend to larger patterns by positioning the center of attention further from the queried pixel position. We also include in the Appendix a plot of the attention positions for a higher number of heads ($N_h=16$), fig:isomanyheads displays both local patterns similar to CNN and long range dependencies. Interestingly, attention heads do not overlap and seem to take an arrangement maximizing the coverage of the input space. To verify that our self-attention model performs equally well as a small ResNet (tab:parametersize), in fig:learnedattentionmap we display the evolution of the test accuracy on CIFAR-10 over the 300 epochs of training. The ResNet is faster to converge, but we cannot ascertain whether this corresponds to an inherent property of the architecture or an artifact of the adopted optimization procedures. 
Our implementation could be optimized to exploit the locality of Gaussian attention probabilities and reduce significantly the number of FLOPS. Experiments ::: Learned Relative Positional Encoding We move on to study the positional encoding used in practice by fully-attentional models on images. We implemented the 2D relative positional encoding scheme used by BIBREF0, BIBREF9: we learn a $\lfloor D_p / 2 \rfloor $ position encoding vector for each row and each column pixel shift. Hence the relative positional encoding of a key pixel at position $$ with a query pixel at position $$ is the concatenation of the row shift embedding ${{\delta }}_1$ and the column shift embedding ${{\delta }}_2$ (where ${{\delta }}= - $). We chose $D_p = D_{\textit {out}} = 400$ in the experiment. We differ from the (unpublished) implementation described by BIBREF0 in the following points: (i) we do not use convolution stem and ResNet bottlenecks for downsampling, but only a $2\times 2$ invertible downsampling layer BIBREF16 at input, (ii) we use $D_h = D_{\textit {out}}$ instead of $D_h = D_{\textit {out}} / N_h$ backed from our theory that the effective number of learned filters is $\min (D_h, D_{\textit {out}})$, (iii) the attention scores are computed using only the relative positions of the pixels and not the data. As seen in tab:parametersize, our implementation achieves accuracy close to that of ResNet18. The attention probabilities of each head at each layer are displayed on fig:learnedattentionmap. The figure confirms our hypothesis for the first two layers and partially for the third: even when left to learn the encoding from the data, certain self-attention heads (depicted on the left) learn to attend to individual pixels, closely matching the condition of Lemma UNKREF15 and thus Theorem UNKREF11. At the same time, other heads pay attention to horizontally-symmetric but non-localized patterns, as well as to long-range pixel inter-dependencies. The phenomenon is particularly prominent for layers four to six, where the behavior of self-attention can be seen to deviate from that of convolution. We also notice that vertical symmetry is much more rare in the learned attention probabilities of high layers. This matches the intuition that, for image classification, distinguishing between what is above or below something is more crucial than what is left or right. Finally, the fact that some of the heads in the last two layers seem to be redundant, likely indicating that the computational and space complexity of the model could be amenable to further reduction, for example by pruning. Conclusion We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that learned fully-attentional models do behave similar to CNN in practice. More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters—similar to deformable convolutions BIBREF18, BIBREF19. Interesting directions for future work include translating existing insights from the rich CNNs literature back to transformers on various data modalities, including images, text and time series. Also, though we currently lack the computational resources to do so, we would be interested to test whether our findings are replicated for datasets of larger-scale, such as ImageNet and COCO. 
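Referring back to the learned relative positional encoding used in the experiments above, the following PyTorch-style sketch builds the encoding as a concatenation of a row-shift embedding and a column-shift embedding of floor(D_p/2) dimensions each, and computes attention scores from relative positions only, as in point (iii) above. The module and variable names are ours and the sizes are toy values; this is not the released implementation.

```python
import torch
import torch.nn as nn

class RelPosAttention(nn.Module):
    """Attention probabilities of one head, computed from relative positions only."""

    def __init__(self, height, width, d_pos):
        super().__init__()
        self.height, self.width = height, width
        # One learned embedding per possible row shift and per possible column shift.
        self.row_emb = nn.Embedding(2 * height - 1, d_pos // 2)
        self.col_emb = nn.Embedding(2 * width - 1, d_pos // 2)
        self.head_vec = nn.Parameter(torch.randn(d_pos))    # v in the score v^T r_delta

    def forward(self):
        qi, qj = torch.meshgrid(torch.arange(self.height),
                                torch.arange(self.width), indexing="ij")
        q = torch.stack([qi.flatten(), qj.flatten()], dim=-1)        # (T, 2) pixel coords
        delta = q[None, :, :] - q[:, None, :]                        # key - query shifts
        r = torch.cat([self.row_emb(delta[..., 0] + self.height - 1),
                       self.col_emb(delta[..., 1] + self.width - 1)], dim=-1)
        scores = r @ self.head_vec                                   # (T, T)
        return torch.softmax(scores, dim=-1)                         # attention probabilities

probs = RelPosAttention(height=6, width=6, d_pos=16)()
print(probs.shape)   # torch.Size([36, 36])
```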
Appendix ::: Generalized Quadratic Positional Encoding We noticed the similarity of the attention probabilities in the quadratic positional encoding (sec:attentioncanimplementcnn) to isotropic bivariate Gaussian distributions with bounded support: (, :) = e-(- ) - 2' [W] [H] e-(' - ) - 2 . Building on this observation, we further extended our attention mechanism to non-isotropic Gaussian distribution over pixel positions. Each head is parametrized by a center of attention ${{\Delta }}$ and a covariance matrix $$ to obtain the following attention scores, , = -1 2 (- )-1 (- ) = -1 2 -1 + -1 - 1 2 -1 , where, once more, ${{\delta }}= - $. The last term can be discarded because the softmax is shift invariant and we rewrite the attention coefficient as a dot product between the head target vector $$ and the relative position encoding $_{{{\delta }}}$ (consisting of the first and second order combinations of the shift in pixels ${{\delta }}$): Appendix ::: Generalized Quadratic Positional Encoding ::: Evaluation. We trained our model using this generalized quadratic relative position encoding. We were curious to see if, using the above encoding the self-attention model would learn to attend to non-isotropic groups of pixels—thus forming unseen patterns in CNNs. Each head was parametrized by ${{\Delta }}\in ^2$ and $^{-1/2} \in ^{2\times 2}$ to ensure that the covariance matrix remained positive semi-definite. We initialized the center of attention to ${{\Delta }}^{(h)} \sim \mathcal {N}(\mathbf {0}, 2 _2)$ and $^{-1/2} = _2 + \mathcal {N}(\mathbf {0}, 0.01 _2)$ so that initial attention probabilities were close to an isotropic Gaussian. fig:nonisoattentionfinal shows that the network did learn non-isotropic attention probability patterns, especially in high layers. Nevertheless, the fact that we do not obtain any performance improvement seems to suggest that attention non-isotropy is not particularly helpful in practice—the quadratic positional encoding suffices. Appendix ::: Generalized Quadratic Positional Encoding ::: Pruning degenerated heads. We noticed that certain non-isotropic attention heads attended on “non-intuitive” patches of pixels: either attending a very thin stripe of pixels, when $^{-1}$ was almost singular, or attending all pixels uniformly, when $^{-1}$ was close to $\mathbf {0}$ (i.e. constant attention scores). We asked ourselves, are such attention patterns indeed useful for the model or are these heads degenerated and unused? To find out, we pruned all heads having largest eigen-values smaller than $10^{-5}$ or condition number (ratio of the biggest and smallest eigen-values) greater than $10^5$. Specifically in our model with 6-layer and 9-heads each, we pruned $[2, 4, 1, 2, 6, 0]$ heads from the first to the last layer. This means that these layers cannot express a $3 \times 3$ kernel anymore. As shown in yellow on fig:learningcurve, this ablation initially hurts a bit the performance, probably due to off biases, but after a few epochs of continued training with a smaller learning rate (divided by 10) the accuracy recovers its unpruned value. Hence, without sacrificing performance, we reduce the size of the parameters and the number of FLOPS by a fourth. Appendix ::: Increasing the Number of Heads For completeness, we also tested increasing the number of heads of our architecture from 9 to 16. Similar to Figure FIGREF23, we see that the network distinguishes two main types of attention patterns. 
Localized heads (i.e., those that attend to nearly individual pixels) appear more frequently in the first few layers. The self-attention layer uses these heads to act in a manner similar to how convolutional layers do. Heads with less-localized attention become more common at higher layers.
The learned attention probabilities tend to respect the conditions of Lemma UNKREF15, corroborating the hypothesis; to validate that the model learns a meaningful classifier, it is compared to the standard ResNet18 on CIFAR-10.
3cca26a9474d3b0d278e4dd57e24b227e7c2cd41
3cca26a9474d3b0d278e4dd57e24b227e7c2cd41_0
Q: What dataset is used? Text: Introduction How infants discover the words of their native languages is a long-standing question in developmental psychology BIBREF0 . Machine learning has contributed much to this discussion by showing that predictive models of language are capable of inferring the existence of word boundaries solely based on statistical properties of the input BIBREF1 , BIBREF2 , BIBREF3 . Unfortunately, the best language models, measured in terms of their ability to model language, segment quite poorly BIBREF4 , BIBREF5 , while the strongest models in terms of word segmentation are far too weak to adequately predict language BIBREF3 , BIBREF6 . Moreover, since language acquisition is ultimately a multimodal process, neural models which simplify working with multimodal data offer opportunities for future research. However, as BIBREF7 have argued, current neural models' inability to discover meaningful words is too far behind the current (non-neural) state-of-the-art to be a useful foundation. In this paper, we close this gap by introducing a neural model (§ SECREF2 ) of natural language sentences that explicitly discovers and models word-like units from completely unsegmented sequences of characters. The model generates text as a sequence of segments, where each segment is generated either character-by-character from a sequence model or as a single draw from a lexical memory of multi-character units. The segmentation decisions and decisions about the generation mechanism for each segment are latent. In order to efficiently deal with an exponential number of possible segmentations, we use a conditional semi-Markov model. The characters inside each segment are generated using non-Markovian processes, conditional on the previously generated characters (the previous segmentation decisions are forgotten). This conditional independence assumption—forgetting previous segmenation decisions—enables us to calculate and differentiate exact marginal likelihood over all possible discrete segmentation decisions with a dynamic programming algorithm, while letting the model retain the most relevant information about the generation history. There are two components to make the model work. One is a lexical memory. The memory stores pairs of a vector (key) and a string (value) appearing in the training set and the vector representation of each strings are randomly initialized and learned during training. The other is a regularizer (§ SECREF3 ) to prevent the model from overfitting to the training data. Since the lexical memory stores strings that appeared in the training data, each sentence could be generated as a single unit, thus the model can fit to the training data perfectly while generalizing poorly. The regularizer penalizes based on the expectation of the powered length of each segment. Although the length of each segment is not differentiable, the expectation is differentiable and can be computed efficiently together with the marginal likelihood for each sentence in a single forward pass. Our evaluation (§ SECREF4 –§ SECREF6 ), therefore, looks at both language modeling performance and the quality of the induced segmentations. First, we look at the segmentations induced by our model. We find that these correspond closely to human intuitions about word segments, competitive with the best existing models. 
These segments are obtained in models whose hyperparameters are tuned to optimize validation likelihood, whereas tuning the hyperparameters based on likelihood on our benchmark models produces poor segmentations. Second, we confirm findings BIBREF8 , BIBREF9 that show that word segmentation information leads to better language models compared to pure character models. However, in contrast to previous work, we do so without observing the segment boundaries, including in Chinese, where word boundaries are not part of the orthography. Finally, we find that both the lexicon and the regularizer are crucial for good performance, particularly in word segmentation—removing either or both significantly harms performance. Model We now describe the segmental neural language model (SNLM). Refer to Figure FIGREF1 for an illustration. The SNLM generates a character sequence INLINEFORM0 , where each INLINEFORM1 is a character in a finite character set INLINEFORM2 . Each sequence INLINEFORM3 is the concatenation of a sequence of segments INLINEFORM4 where INLINEFORM5 measures the length of the sequence in segments and each segment INLINEFORM6 is a sequence of characters, INLINEFORM7 . Intuitively, each INLINEFORM8 corresponds to one word. Let INLINEFORM9 represent the concatenation of the characters of the segments INLINEFORM10 to INLINEFORM11 , discarding segmentation information; thus INLINEFORM12 . For example if INLINEFORM13 , the underlying segmentation might be INLINEFORM14 (with INLINEFORM15 and INLINEFORM16 ), or INLINEFORM17 , or any of the INLINEFORM18 segmentation possibilities for INLINEFORM19 . The SNLM defines the distribution over INLINEFORM0 as the marginal distribution over all segmentations that give rise to INLINEFORM1 , i.e., DISPLAYFORM0 To define the probability of INLINEFORM0 , we use the chain rule, rewriting this in terms of a product of the series of conditional probabilities, INLINEFORM1 . The process stops when a special end-sequence segment INLINEFORM2 is generated. To ensure that the summation in Eq. EQREF2 is tractable, we assume the following: DISPLAYFORM0 which amounts to a conditional semi-Markov assumption—i.e., non-Markovian generation happens inside each segment, but the segment generation probability does not depend on memory of the previous segmentation decisions, only upon the sequence of characters INLINEFORM0 corresponding to the prefix character sequence INLINEFORM1 . This assumption has been employed in a number of related models to permit the use of LSTMs to represent rich history while retaining the convenience of dynamic programming inference algorithms BIBREF5 , BIBREF10 , BIBREF11 . Segment generation We model INLINEFORM0 as a mixture of two models, one that generates the segment using a sequence model and the other that generates multi-character sequences as a single event. Both are conditional on a common representation of the history, as is the mixture proportion. To represent INLINEFORM0 , we use an LSTM encoder to read the sequence of characters, where each character type INLINEFORM1 has a learned vector embedding INLINEFORM2 . Thus the history representation at time INLINEFORM3 is INLINEFORM4 . This corresponds to the standard history representation for a character-level language model, although in general we assume that our modeled data is not delimitered by whitespace. 
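The marginalization over segmentations described above sums over exponentially many latent decompositions; the toy snippet below (purely illustrative, using an arbitrary string) enumerates them by choosing which of the |x| - 1 internal positions are segment boundaries.

```python
from itertools import combinations

def all_segmentations(x):
    # Every segmentation corresponds to a subset of the |x| - 1 internal boundaries.
    n = len(x)
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = (0,) + cuts + (n,)
            yield [x[i:j] for i, j in zip(bounds, bounds[1:])]

segs = list(all_segmentations("thecat"))
print(len(segs))   # 32 == 2 ** (6 - 1)
print(segs[:3])    # [['thecat'], ['t', 'hecat'], ['th', 'ecat']]
```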
The first component model, INLINEFORM0 , generates INLINEFORM1 by sampling a sequence of characters from a LSTM language model over INLINEFORM2 and a two extra special symbols, an end-of-word symbol INLINEFORM3 and the end-of-sequence symbol INLINEFORM4 discussed above. The initial state of the LSTM is a learned transformation of INLINEFORM5 , the initial cell is INLINEFORM6 , and different parameters than the history encoding LSTM are used. During generation, each letter that is sampled (i.e., each INLINEFORM7 ) is fed back into the LSTM in the usual way and the probability of the character sequence decomposes according to the chain rule. The end-of-sequence symbol can never be generated in the initial position. The second component model, INLINEFORM0 , samples full segments from lexical memory. Lexical memory is a key-value memory containing INLINEFORM1 entries, where each key, INLINEFORM2 , a vector, is associated with a value INLINEFORM3 . The generation probability of INLINEFORM4 is defined as INLINEFORM5 where INLINEFORM0 is 1 if the INLINEFORM1 th value in memory is INLINEFORM2 and 0 otherwise, and INLINEFORM3 is a matrix obtained by stacking the INLINEFORM4 's. Note that this generation process will assign zero probability to most strings, but the alternate character model can generate anything in INLINEFORM5 . In this work, we fix the INLINEFORM0 's to be subsequences of at least length 2, and up to a maximum length INLINEFORM1 that are observed at least INLINEFORM2 times in the training data. These values are tuned as hyperparameters (See Appendix SECREF10 for details of the reported experiments). The mixture proportion, INLINEFORM0 , determines how likely the character generator is to be used at time INLINEFORM1 (the lexicon is used with probability INLINEFORM2 ). It is defined by as INLINEFORM3 . The total generation probability of INLINEFORM0 is thus: INLINEFORM1 Inference We are interested in two inference questions: first, given a sequence INLINEFORM0 , evaluate its (log) marginal likelihood; second, given INLINEFORM1 , find the most likely decomposition into segments INLINEFORM2 . To efficiently compute the marginal likelihood, we use a variant of the forward algorithm for semi-Markov models BIBREF12 , which incrementally computes a sequence of probabilities, INLINEFORM0 , where INLINEFORM1 is the marginal likelihood of generating INLINEFORM2 and concluding a segment at time INLINEFORM3 . Although there are an exponential number of segmental decompositions of INLINEFORM4 , these values can be computed using INLINEFORM5 space and INLINEFORM6 time as: DISPLAYFORM0 By letting INLINEFORM0 , then INLINEFORM1 . The most probable segmentation of a sequence INLINEFORM0 can be computed by replacing the summation with a INLINEFORM1 operator in Eq. EQREF12 and maintaining backpointers. Expected length regularization When the lexical memory contains all the substrings in the training data, the model easily overfits by copying the longest continuation from the memory. To prevent overfitting, we introduce a regularizer that penalizes based on the expectation of the exponentiated (by a hyperparameter INLINEFORM0 ) length of each segment: INLINEFORM1 This can be understood as a regularizer based on the double exponential prior identified to be effective in previous work BIBREF13 , BIBREF6 . This expectation is a differentiable function of the model parameters. 
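The following is a minimal sketch of the dynamic program discussed here: the forward recursion accumulates the marginal likelihood over all segmentations, and a paired recursion accumulates the numerator of the expected powered segment length, so the regularizer comes out of the same pass; a brute-force enumeration over a short string confirms both quantities. The segment scorer is a stand-in (an unnormalized per-character score), not the paper's character-LSTM and lexical-memory mixture.

```python
import numpy as np
from itertools import combinations

def p_seg(segment, prefix):
    # Stand-in segment score: a geometric-style score per character.
    # In the model this would be the mixture of the character LSTM and the
    # lexical memory, conditioned on an LSTM encoding of `prefix`.
    return 0.4 ** len(segment)

def forward_with_expected_length(x, beta=2.0):
    n = len(x)
    alpha = np.zeros(n + 1)   # alpha[t]: marginal prob. of x[:t] with a segment ending at t
    r = np.zeros(n + 1)       # r[t]: running numerator of E[sum of |segment|^beta]
    alpha[0] = 1.0
    for t in range(1, n + 1):
        for j in range(t):
            seg = x[j:t]
            p = p_seg(seg, x[:j])
            alpha[t] += alpha[j] * p
            r[t] += (r[j] + alpha[j] * len(seg) ** beta) * p
    return alpha[n], r[n] / alpha[n]      # marginal likelihood, expected length penalty

def brute_force(x, beta=2.0):
    n, Z, num = len(x), 0.0, 0.0
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = (0,) + cuts + (n,)
            segs = [x[i:j] for i, j in zip(bounds, bounds[1:])]
            p = np.prod([p_seg(s, x[:i]) for s, i in zip(segs, bounds)])
            Z += p
            num += p * sum(len(s) ** beta for s in segs)
    return Z, num / Z

print(forward_with_expected_length("abcde"))
print(brute_force("abcde"))    # matches the dynamic program
```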
Because of the linearity of the penalty across segments, it can be computed efficiently using the above dynamic programming algorithm under the expectation semiring BIBREF14 . This is particularly efficient since the expectation semiring jointly computes the expectation and marginal likelihood. Training Objective The model parameters are trained by minimizing the penalized log likelihood of a training corpus INLINEFORM0 of unsegmented sentences, INLINEFORM1 Datasets We evaluate our model on both English and Chinese segmentation. For both languages, we used standard datasets for word segmentation and language modeling. For all datasets, we used train, validation and test splits. Since our model assumes a closed character set, we removed validation and test samples which contain characters that do not appear in the training set. In the English corpora, whitespace characters are removed. In Chinese, they are not present to begin with. Refer to Appendix SECREF9 for dataset statistics. English The Brent corpus is a standard corpus used in statistical modeling of child language acquisition BIBREF15 , BIBREF16 . The corpus contains transcriptions of utterances directed at 13- to 23-month-old children. The corpus has two variants: an orthographic one (BR-text) and a phonemic one (BR-phono), where each character corresponds to a single English phoneme. As the Brent corpus does not have a standard train and test split, and we want to tune the parameters by measuring the fit to held-out data, we used the first 80% of the utterances for training, the next 10% for validation, and the rest for test. We use the commonly used version of the PTB prepared by BIBREF17 . However, since we removed space symbols from the corpus, our cross entropy results cannot be compared to those usually reported on this dataset. Chinese Since Chinese orthography does not mark spaces between words, there have been a number of efforts to annotate word boundaries. We evaluate against two corpora that have been manually segmented according to different segmentation standards. The Beijing University Corpus (PKU) was one of the corpora used for the International Chinese Word Segmentation Bakeoff BIBREF18 . We also use the Penn Chinese Treebank Version 5.1 (CTB) BIBREF19 . It generally has a coarser segmentation than PKU (e.g., in CTB a full name, consisting of a given name and family name, is a single token), and it is a larger corpus. Experiments We compare our model to benchmark Bayesian models, which are currently the best known unsupervised word discovery models, as well as to a simple deterministic segmentation criterion based on surprisal peaks BIBREF1 , on both language modeling and segmentation performance. Although the Bayesian models have been shown to be able to discover plausible word-like units, we found that the hyperparameter settings that give the best language modeling performance with such models do not produce the good structures reported in previous work. This is problematic since there are no objective criteria for choosing hyperparameters in a fully unsupervised manner when the model is applied to completely unknown languages or domains. Thus, our experiments are designed to assess how well the models infer word segmentations of unsegmented inputs when they are trained and tuned to maximize the likelihood of the held-out text.
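A minimal data-preparation sketch reflecting the preprocessing described above: whitespace is stripped from the English corpora, the Brent utterances are split 80/10/10, and held-out lines containing characters unseen in training are dropped to respect the closed character set. The file path is an assumption for illustration.

```python
def load_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def strip_whitespace(lines):
    # English corpora: remove whitespace so that word boundaries are unobserved.
    return ["".join(line.split()) for line in lines]

def split_80_10_10(lines):
    # Brent has no standard split: first 80% train, next 10% validation, rest test.
    n = len(lines)
    return lines[: int(0.8 * n)], lines[int(0.8 * n): int(0.9 * n)], lines[int(0.9 * n):]

def filter_closed_charset(train, held_out):
    # Drop validation/test samples containing characters never seen in training.
    charset = set("".join(train))
    return [line for line in held_out if set(line) <= charset]

train, valid, test = split_80_10_10(strip_whitespace(load_lines("br-phono.txt")))  # assumed path
valid = filter_closed_charset(train, valid)
test = filter_closed_charset(train, test)
```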
Results In this section, we first conduct a careful comparison of segmentation performance on the phonemic Brent corpus (BR-phono) across several different segmentation baselines, and we find that our model obtains competitive segmentation performance. Additionally, ablation experiments demonstrate that both lexical memory and the proposed expected length regularization are necessary for inferring good segmentations. We then show that, on the other corpora as well, we obtain better segmentations than the baseline models. Finally, we also show that our model has superior performance, in terms of held-out perplexity, compared to a character-level LSTM language model. Thus, overall, our results show that we can obtain good segmentations on a variety of tasks, while still having very good language modeling performance. Related Work Learning to discover and represent temporally extended structures in a sequence is a fundamental problem in many fields. For example, in language processing, unsupervised learning of multiple levels of linguistic structure such as morphemes BIBREF25 , words BIBREF3 and phrases BIBREF26 has been investigated. Recently, speech recognition has benefited from techniques that enable the discovery of subword units BIBREF27 , BIBREF5 ; however, in this work the optimally discovered substrings look very unlike orthographic words. The model proposed by BIBREF5 is essentially our model without a lexicon or the expected length regularization, i.e., ( INLINEFORM0 memory, INLINEFORM1 length). Beyond language, temporal abstraction in sequential decision making processes has been investigated for a long time in reinforcement learning. Option discovery in hierarchical reinforcement learning is formalized similarly to the approach we take (using semi-Markov decision processes where we use semi-Markov generative models), and the motivation is the same: high-level options/words have very different relationships to each other than primitive actions/characters BIBREF28 , BIBREF29 , BIBREF30 . Conclusion We introduced the segmental neural language model, which combines a lexicon and a character-level word generator to produce a model that both improves language modeling performance over word-agnostic character LSTMs and discovers latent words as well as the best existing approaches to unsupervised word discovery. This constellation of results suggests that structure discovery and predictive modeling need not be at odds with one another: the structures we observe in nature are worth modeling, even with powerful learners. Dataset statistics Table TABREF34 summarizes dataset statistics. SNLM Model Configuration For each RNN-based model, we used 512 dimensions for the character embeddings, and the LSTMs have 512 hidden units. All the parameters, including character projection parameters, are randomly sampled from a uniform distribution between INLINEFORM0 and INLINEFORM1 . The initial hidden and memory states of the LSTMs are initialized to zero. A dropout rate of 0.5 was used for all but the recurrent connections. To restrict the size of the memory, we stored substrings which appeared at least INLINEFORM0 times in the training corpora and tuned INLINEFORM1 with a grid search. The maximum length of subsequences INLINEFORM2 was tuned on the held-out likelihood using a grid search. Table TABREF35 summarizes the parameters for each dataset.
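The lexical memory entries described above (substrings of length 2 up to a maximum L that occur at least s times in the training data) can be collected with a simple count, as in the sketch below; the thresholds shown are placeholders for the grid-searched values in Table TABREF35.

```python
from collections import Counter

def build_lexicon(train_lines, max_len=10, min_count=5):
    """Candidate memory values: substrings of length 2..max_len that occur
    at least min_count times in the training corpus (both thresholds tuned)."""
    counts = Counter()
    for line in train_lines:
        for i in range(len(line)):
            for j in range(i + 2, min(i + max_len, len(line)) + 1):
                counts[line[i:j]] += 1
    return {s for s, c in counts.items() if c >= min_count}

lexicon = build_lexicon(["doyouseethekitty", "doyoulikethekitty"], max_len=6, min_count=2)
print(sorted(lexicon))
```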
Note that we did not tune the hyperparameters on segmentation quality to ensure that the models are trained in a purely unsupervised manner assuming no reference segmentations are available. Learning The models were trained with the Adam update rule BIBREF22 with a learning rate of 0.01. The learning rate is divided by 4 if there is no improvement on development data. The maximum norm of the gradients was clipped at 1.0.
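A hedged PyTorch-style sketch of the optimization settings just described: Adam with a learning rate of 0.01, the learning rate divided by 4 when development loss stops improving, and gradient norms clipped at 1.0. The model and dev_loss objects are assumed to exist, and the loop is schematic rather than the actual training code.

```python
import torch

def train(model, train_batches, dev_loss, epochs=20):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    best_dev = float("inf")
    for _ in range(epochs):
        for batch in train_batches:
            opt.zero_grad()
            loss = model(batch)  # assumed to return the penalized negative log likelihood
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # clip at 1.0
            opt.step()
        current = dev_loss(model)
        if current < best_dev:
            best_dev = current
        else:
            for group in opt.param_groups:  # no improvement: divide the learning rate by 4
                group["lr"] /= 4.0
```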
Brent corpus, PTB , Beijing University Corpus, Penn Chinese Treebank
8f8f2b0046e1a78bd34c0c3d6b6cb24463a8ed7f
8f8f2b0046e1a78bd34c0c3d6b6cb24463a8ed7f_0
Q: What language do they look at? Text: Introduction How infants discover the words of their native languages is a long-standing question in developmental psychology BIBREF0 . Machine learning has contributed much to this discussion by showing that predictive models of language are capable of inferring the existence of word boundaries solely based on statistical properties of the input BIBREF1 , BIBREF2 , BIBREF3 . Unfortunately, the best language models, measured in terms of their ability to model language, segment quite poorly BIBREF4 , BIBREF5 , while the strongest models in terms of word segmentation are far too weak to adequately predict language BIBREF3 , BIBREF6 . Moreover, since language acquisition is ultimately a multimodal process, neural models which simplify working with multimodal data offer opportunities for future research. However, as BIBREF7 have argued, current neural models' inability to discover meaningful words is too far behind the current (non-neural) state-of-the-art to be a useful foundation. In this paper, we close this gap by introducing a neural model (§ SECREF2 ) of natural language sentences that explicitly discovers and models word-like units from completely unsegmented sequences of characters. The model generates text as a sequence of segments, where each segment is generated either character-by-character from a sequence model or as a single draw from a lexical memory of multi-character units. The segmentation decisions and decisions about the generation mechanism for each segment are latent. In order to efficiently deal with an exponential number of possible segmentations, we use a conditional semi-Markov model. The characters inside each segment are generated using non-Markovian processes, conditional on the previously generated characters (the previous segmentation decisions are forgotten). This conditional independence assumption—forgetting previous segmenation decisions—enables us to calculate and differentiate exact marginal likelihood over all possible discrete segmentation decisions with a dynamic programming algorithm, while letting the model retain the most relevant information about the generation history. There are two components to make the model work. One is a lexical memory. The memory stores pairs of a vector (key) and a string (value) appearing in the training set and the vector representation of each strings are randomly initialized and learned during training. The other is a regularizer (§ SECREF3 ) to prevent the model from overfitting to the training data. Since the lexical memory stores strings that appeared in the training data, each sentence could be generated as a single unit, thus the model can fit to the training data perfectly while generalizing poorly. The regularizer penalizes based on the expectation of the powered length of each segment. Although the length of each segment is not differentiable, the expectation is differentiable and can be computed efficiently together with the marginal likelihood for each sentence in a single forward pass. Our evaluation (§ SECREF4 –§ SECREF6 ), therefore, looks at both language modeling performance and the quality of the induced segmentations. First, we look at the segmentations induced by our model. We find that these correspond closely to human intuitions about word segments, competitive with the best existing models. 
English, Chinese
e37c32fce68759b2272adc1e44ea91c1a7c47059
e37c32fce68759b2272adc1e44ea91c1a7c47059_0
Q: What diverse domains and languages are present in new datasets? Text: Introduction Text classification can be categorized according to the text length of the data, from sentence-level classification BIBREF0 to document-level classification BIBREF1, BIBREF2. One such task is sentiment classification BIBREF3, a subtask of sentiment analysis BIBREF4, BIBREF5 where we are to predict the sentiment/rating given a review written by a user. In some domains, the length of these reviews varies widely. For example, well-known review websites in East Asia such as Naver Movies and Douban Movies provide two channels for users to write reviews, depending on their preferred length. Figure FIGREF4 shows the review channels provided in Naver Movies. The first channel is a short review channel, which contains large amounts of reviews and requires users to write short reviews accompanied by rating labels. Although labeled, these reviews lack the expressiveness needed to extract useful information. In contrast, the second channel is a long review channel, which contains few long and detailed reviews with descriptions of different aspects of the product/service. Despite being more expressive, long reviews are often not accompanied by sentiment labels, which most supervised sentiment classification models would require. We study the “transferability” from one review channel to the other. That is, we try to answer whether a text classifier trained on a dataset with length $\alpha $ can predict a text with length $\beta $, where $\alpha $ and $\beta $ differ by a large margin (e.g., sentences versus paragraphs). This is an important question because there are scenarios where we may have better and more plentiful labeled text data, but want to classify unlabeled texts of a different length. For long to short transfer, more expressive long reviews can be leveraged for training a sentiment classifier for short and context-sparse reviews. For short to long transfer, large amounts of short reviews can be used as supervision to train a classifier for long reviews. To motivate the non-triviality of such transfer, we train an out-channel (OC) classifier that uses short texts to predict long texts, and an in-channel (IC) classifier that uses long texts for both training and prediction. We also experiment in the converse direction. We use three kinds of classifiers: bag-of-words (BoW), convolutional neural networks BIBREF6, and BERT multilingual BIBREF7. We calculate the transfer loss BIBREF8, which is the difference between the out-channel and in-channel classifier errors (i.e., $\text{TL}=0$ means trivially transferable). Table TABREF7 shows that, though using a better inductive bias such as CNN and BERT seems to slightly lower TL, it remains significantly high, consistently suggesting that length transfer is non-trivial. Our first contribution is thus to define a new task called Cross Length Transfer (CLT). CLT is a task similar to Cross Domain Transfer BIBREF9 and Cross Lingual Transfer BIBREF10, except that the difference between the source and target texts is the text length, whose non-trivial influence is shown in Table TABREF7. Our second contribution is to show that models from similar tasks (e.g. Cross Domain Transfer and Multiple Instance Learning) are not effective for CLT and even yield negative transfer, as we elaborate in Section SECREF10 and empirically show in Section SECREF4.
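The transfer loss used in Table TABREF7 can be computed directly from the two classifiers' error rates on the same target test set, as in this small sketch; the toy labels and predictions are invented. TL close to 0 means the transfer is trivial, and TL below 0 means the out-channel classifier is actually better.

```python
def error_rate(y_true, y_pred):
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def transfer_loss(y_true, out_channel_pred, in_channel_pred):
    """TL = e(out-channel) - e(in-channel), both evaluated on the target test set."""
    return error_rate(y_true, out_channel_pred) - error_rate(y_true, in_channel_pred)

# Toy illustration with invented predictions on five target reviews.
gold = [1, 0, 1, 1, 0]
oc = [1, 1, 0, 1, 0]  # classifier trained on the other channel
ic = [1, 0, 1, 1, 1]  # classifier trained in-channel
print(transfer_loss(gold, oc, ic))  # 0.4 - 0.2 = 0.2
```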
Finally, we present two new models specifically for CLT: a strong baseline called BaggedCNN that treats long texts as bags containing short texts, and a state-of-the-art CLT model called Length Transfer Networks (LeTraNets). LeTraNets enables a two-way encoding scheme using multiple training mechanims, and accepts both short and long text inputs, where one such input is created artificially through concatenation or segmentation. Table TABREF7 shows that LeTraNets has the best transfer loss, and sometimes perform better than in-channel classifier (when TL is less than zero). We test our models using the multiple benchmark datasets we gathered and show that models from other tasks perform worse than our proposed strong baseline and that LeTraNets performs the best among all models. To the best of our knowledge, we are the first to study CLT. Cross Length Transfer Cross Length Transfer (CLT) is an unsupervised transfer learning task in which the setting is that the sampling distributions of the training and test data are different because the texts lengths are different (e.g., sentences and paragraphs). Formally, we suppose two sets of texts: a source set $\mathcal {S}$ in which we have labels, and a target set $\mathcal {T}$ in which we want to predict the labels. Moreover, we know that the text length distributions of $\mathcal {S}$ and $\mathcal {T}$ are different, such that an equality case exists as $|\mathcal {S}| = r |\mathcal {T}|$, where $|\mathcal {X}|$ is the mean length of the set $\mathcal {X}$, and $r \ne 1$ is a non-negative rate of difference between two mean lengths. There are two subtasks: long to short transfer where $r>1$ and thus $\mathcal {S}$ contains longer texts, and short to long transfer where $r<1$ and thus $\mathcal {S}$ contains shorter texts. A CLT model should effectively learn to predict the labels of $\mathcal {T}$, on both scenarios. A concrete and simple example is when $\mathcal {S}$ contains labeled sentence reviews and $\mathcal {T}$ contains unlabeled paragraph reviews. A CLT model uses $\mathcal {S}$ for training to effectively predict labels of reviews in $\mathcal {T}$. Also, the same CLT model should be able to do effective prediction vice versa, i.e., when $\mathcal {S}$ are paragraph reviews and $\mathcal {T}$ are sentence reviews. Previous unsupervised transfer learning tasks, i.e. Cross Domain Transfer BIBREF9 and Cross Lingual Transfer BIBREF11, are similar to CLT but have concrete differences. Generally, the goal of these tasks is to map semantic domains, contextually or linguistically, of both $\mathcal {S}$ and $\mathcal {T}$ into a shared space, by aligning the vocabulary BIBREF12, BIBREF13, expanding domain-specific lexicons BIBREF14, BIBREF15, generating labeled samples BIBREF16, and learning to indiscriminate between domains BIBREF17, BIBREF18. These methods are generally symmetric; i.e., even when $\mathcal {S}$ and $\mathcal {T}$ interchange, the same method can be applied easily. However, in CLT, both $\mathcal {S}$ and $\mathcal {T}$ are already in the same contextual and linguistic domains, thus previous methods would not work. Also, CLT brings two new challenges against devising a symmetric model. First, texts with different context richness may have different properties they focus on: hierarchical structures BIBREF2 may be more important for document-level reviews while finding lexical/phrasal cues BIBREF0 may be more important for sentence-level reviews. 
Second, words in texts of different lengths may have different semantic intensity. For example, “good” may have a very high positive sentiment intensity in short texts, and a relatively low positive sentiment intensity in long ones. Cross Length Transfer ::: Benchmark Datasets We provide three pairs of short/long datasets from different domains (movies and restaurants) and from different languages (English and Korean) suitable for the task: Mov_en, Res_en, and Mov_ko. Most of the datasets are from previous literature and are gathered differently. The Mov_en datasets are gathered from different websites; the short dataset consists of sentences hand-picked by BIBREF19 from document-level reviews on the Rotten Tomatoes website, while the long dataset consists of reviews from the IMDB website obtained by BIBREF20. The Res_en dataset consists of reviews from Yelp, where the short dataset consists of reviews with character lengths less than 140 from BIBREF21, while reviews in the long dataset are gathered from BIBREF20. We also share new short/long datasets Mov_ko, which are gathered from the two different channels available in Naver Movies, as shown in Figure FIGREF4. Unlike previous datasets BIBREF9, BIBREF22, which used polarity/binary (e.g., positive or negative) labels as classes, we also provide fine-grained classes, with five classes of different sentiment intensities (e.g., 1 is strong negative, 5 is strong positive), for Res_en and Mov_ko. Following the Cross Domain Transfer setting BIBREF9, BIBREF23, BIBREF24, we limit the datasets to a small scale so as to focus on the main task at hand. This ensures that models focus on the transfer task and decreases the influence of other factors that can be found when using larger datasets. Finally, following BIBREF22, we provide additional unlabeled data for those models that need it BIBREF9, BIBREF23, except for the long dataset of Mov_ko, where the labeled reviews are very limited. We show the dataset statistics in Table TABREF9, and share the datasets here: https://github.com/rktamplayo/LeTraNets. Cross Length Transfer ::: Possible Existing Solutions ::: Cross Domain Transfer (CDT) CDT offers models that effectively transfer domain-independent features between two different domains. The most popular non-neural CDT model is Structural Correspondence Learning (SCL) BIBREF25, a method that identifies feature correspondences between different domains using pivot features. A recent neuralized extension is Neural SCL BIBREF26, in which an autoencoder module is integrated into SCL. The CDT literature is vast, and we refer the readers to BIBREF27 and BIBREF28 for overviews. Although these models may see improvements due to a possible difference in vocabulary (especially when the review channels are different), these improvements may be marginal since the domain of the datasets is the same. Cross Length Transfer ::: Possible Existing Solutions ::: Multiple Instance Learning (MIL) MIL is a task where, given the labels of bags of multiple instances, we are to label the individual instances BIBREF29. In the text classification domain, MIL is often devised as segment-level classification BIBREF30, BIBREF31, where documents are bags and sentences in the documents are segments. The most recent MIL model is the Multiple Instance Learning Network (MILNet) BIBREF32, which uses attention-based polarity scoring to identify segment labels.
MIL models can be used in long to short transfer, where we assume that segment labels in long texts can be used to label short reviews. However, they (a) assume that segments from long data, which rely on inter-sentence semantics, are comparable to self-contained short texts, and (b) are ineffective on short to long transfer because it needs multiple sentences to train components of the model for document-level classification. Cross Length Transfer ::: Possible Existing Solutions ::: Weak Supervision A simple yet possible solution for short to long transfer is a three-step approach where we (1) cluster the short texts into several long texts, (2) infer the class labels of the clusters, and (3) use the labeled clusters as weak supervision to create a classifier. Micro Aspect Sentiment Model BIBREF33;amplayo2017aspect does (1) and (2) automatically. For (3), we can train a classifier such as CNNs BIBREF0 to predict labels of long texts. One critical issue of this solution is that since both clustering and class labels are inferred, there is a high chance that at least one of them is incorrect. This thus creates compounding errors that decrease the performance of the model. Our Models ::: BaggedCNN: A Strong Baseline We present BaggedCNN, a simple yet strong baseline to the CLT task. BaggedCNN is a model derived from MILNet BIBREF31. MILNet uses CNN to encode segments, BiGRU BIBREF34 to calculate attention weights, and gated polarity to calculate document-level probabilities. We refer the readers to the original paper for more details. We improve using two key modifications: (a) removing the sequential connections (i.e., BiGRU) between segments, and (b) using a single classifier for both the segments and full document. For each document divided into segments $D=\lbrace S_i\rbrace $, BaggedCNN starts by encoding the segments using a CNN classifier called $\text{CNN}_{bag}$. Then, we pool the segment encodings into one vector using attention mechanism. Finally, we use a logistic regression classifier that can be used to classify either the segments or the document. This is possible since the vectors of both segments and document are in the same vector space: The model is trained differently depending on the transfer task: For long to short transfer, we minimize the cross-entropy loss between the actual and predicted class of the document $\mathcal {L}_d$. For short to long transfer, we minimize the mean cross entropy loss between the actual and predicted class of the segments $\sum \mathcal {L}_{s_i}/n, 1\le i \le n$. Note that BaggedCNN is reduced to a model where average pooling is done instead of the attention mechanism. At test time, we use $y_d$ for classification. While it has been shown that removing the sequential structure in the document level (i.e., BiGRU in the case of MILNet) decreases the performance of the document classifier BIBREF35, BIBREF2, we argue that this removal is effective on the CLT task because of inter-segment independence. That is, sentences in the document are treated similar to short texts. We also show in our experiments that BaggedCNN performs better than MILNet. However, BaggedCNN still fails to consider two things. First, while the model relaxes the strong assumption on similarity between segments and short texts, by removing the sequential connections, most segments cannot be treated as stand-alone short texts. For example, the segment “Yet it is salty.” is not a stand-alone short review. 
Second, when doing short to long transfer, the input short text is just one segment, thus the model is reduced into a weaker hierarchical CNN classifier. Our Models ::: LeTraNets: Length Transfer Networks We improve BaggedCNN by proposing a model called Length Transfer Networks (LeTraNets), as shown in Figure FIGREF16. LeTraNets is composed of two classifiers: a stand-alone CNN classifier with text encoder $\text{CNN}_{lone}$, and BaggedCNN, which includes a segment-level text encoder $\text{CNN}_{bag}$. The $\text{CNN}_{lone}$ encoder is used to capture holistic textual features, while the $\text{CNN}_{bag}$ encoder is used to capture segment-level textual features, assuming there is a bigger text that owns the segments. For each data instance, LeTraNets accepts two kinds of inputs: a long text $D={w_d}$ and a set of short texts $S={{w_{s_0}},...,{w_{s_n}}}$. However, the task setting only provides either one of long texts or short texts as input. We thus create pseudo-texts from the available text data through the following methods. In the long to short transfer task, we use segments in long texts as pseudo-short texts, as used in BaggedCNN. In the short to long transfer task, we concatenate a random number of short texts to create pseudo-long texts. The latter amounts to a possibly infinite number of long texts we can use for training. The short texts are encoded by both $\text{CNN}_{lone}$ and $\text{CNN}_{bag}$ as $s^{\lbrace l\rbrace }_i$ and $s^{\lbrace b\rbrace }_i$. The long texts are encoded using both $\text{CNN}_{lone}$ and BaggedCNN as $d^{\lbrace l\rbrace }$ and $d^{\lbrace b\rbrace }$: The encoded long text vectors $d^{\lbrace l\rbrace }$ and $d^{\lbrace b\rbrace }$ and short text vectors $s^{\lbrace b\rbrace }_i$ and $s^{\lbrace l\rbrace }_i$ are used to classify their labels using softmax classifiers specific to the CNN encoders: Our Models ::: LeTraNets: Length Transfer Networks ::: Training Mechanisms There are two main issues when training the model in the CLT setting. First, both the stand-alone CNN classifier and BaggedCNN are disconnected, acting as two individual classifiers. Second, the model needs both labels for both short and long text data, but we are only given labels for one kind of data during training for each transfer setting. Solving the second issue is crucial for short to long transfer, as we cannot train the full model if we do not have labels for long data. To this end, we use three training mechanisms below that help mitigate these issues. We connect them on different levels. In the word-level, we use the same word embedding space for both classifiers. Beyond word-level, we use a training mechanism called Joint Training (JT). This concatenates the encoded text vectors, and creates another logistic regression classifier for the concatenated vector. This creates a connection between classifiers at the classification-level. Beyond word-level, we introduce Prediction Regularization (PR) mechanism to train encoders with no labels. This regularizes the predictions of a weaker classifier based on the predictions of a stronger classifier. We consider BaggedCNN as the stronger classifier for long to short transfer, and $\text{CNN}_{lone}$ as the stronger classifier for short to long transfer. We use Kullback-Leibler divergence as the regularization function. Finally, using the PR mechanism directly might not work because predictions from the stronger classifier may not be optimized yet. 
Hence, we use Stepwise Pretraining (SP) mechanism to pretrain specific parts of the model in a step-by-step fashion. First, we pretrain the stronger classifier, then the weaker classifier with PR mechanism, and finally the classifier of the JT mechanism. After pretraining, we train the full model. The training configurations are different depending on the transfer task, which is also shown in Figure FIGREF16. For long to short transfer, we use $p(y^{\lbrace j\rbrace }_{d})$ for the JT mechanism and $R_d$ for the PR mechanism. For short to long transfer, we use $p(y^{\lbrace j\rbrace }_{s_i})$ for the JT mechanism and $R_s$ for the PR mechanism. The final training objective is to minimize the loss function, depending on the text length: where $\mathcal {L}^{\lbrace a\rbrace }_x$ is the cross-entropy loss between the actual and predicted values of the classifier $p(y^{\lbrace a\rbrace }_x)$, and $\lambda $ is tuned using a development set. At test time, we use $p(y^{\lbrace j\rbrace }_{d})$ and $p(y^{\lbrace j\rbrace }_{s_i})$ to classify the sentiment for long to short and short to long transfer, respectively. Experiments ::: Experimental Settings The dimensions of word vectors are set to 300. We use pre-trained GloVe embeddings BIBREF36 to initialize our English word vectors, and pre-trained FastText embeddings BIBREF37 to initialize our Korean word vectors. For all CNNs, we set $h=3,4,5$, each with 100 feature maps, following BIBREF0. We use dropout BIBREF38 on all non-linear connections with a dropout rate of 0.5. We set the batch size to 32. We use stochastic gradient descent over shuffled mini-batches with the Adadelta update rule BIBREF39 with $l_2$ constraint of 3. We experiment with a 5-fold cross-validation on the given source training set and report the average results. Experiments ::: Comparison Models We compare our models with the models from similar tasks as discussed in Section SECREF10. Specifically, we compare with (a) Cross Domain Transfer (CDT) models SCL BIBREF9 and NeuSCL BIBREF23, (b) CDT models with a CNN classifier integration BIBREF16 (SCL+CNN and NeuSCL+CNN), (c) a multiple-instance learning (MIL) model MILNet BIBREF31, (d) a weakly supervised model MASM+CNN BIBREF21. We remind that MILNet is only applicable to long to short transfer, and MASM+CNN is only applicable to short to long transfer. We use the available code provided by previous authors. Finally, we also compare with CNN BIBREF0, and a combination of two CNNs (CNNx2) as no-transfer baselines. Experiments ::: Dataset and Evaluation We use the datasets described in Table TABREF9 for all our experiments. We use the following evaluation metrics. For all datasets, we use accuracy (Acc) to measure the overall sentiment classification performance. Additionally, for fine-grained datasets, we use root mean squared error (RMSE) to measure the divergence between the predicted and ground truth sentiment scores. Finally, in order to compare models in an integrated manner, we report the average transfer ratio BIBREF40, a version of the transfer loss which is more adaptive to averaging, calculated as the average quotient between the transfer error and the in-domain baseline error, i.e. $\text{TR} = \sum _x e(\mathcal {S}_x,\mathcal {T}_x) / e_b(\mathcal {T}_x,\mathcal {T}_x)$, where $\mathcal {S}_x$ and $\mathcal {T}_x$ are the source and domain of dataset $x$, respectively, $e$ and $e_b$ are accuracy errors from the competing model and the baseline CNN model. 
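Before turning to the results, here is a hedged PyTorch-style sketch of the prediction-regularization term described above: the weaker classifier's predictive distribution is pulled toward the detached predictions of the stronger classifier with a KL divergence, weighted by lambda in the final loss. The tensor names and shapes are assumptions; only the use of KL divergence and the lambda weighting come from the text.

```python
import torch
import torch.nn.functional as F

def prediction_regularizer(weak_logits, strong_logits):
    """KL(strong || weak): regularize the weaker classifier's predictions
    toward those of the stronger one, which is treated as fixed here."""
    weak_logp = F.log_softmax(weak_logits, dim=-1)
    strong_p = F.softmax(strong_logits, dim=-1).detach()
    return F.kl_div(weak_logp, strong_p, reduction="batchmean")

# Assumed shapes: (batch, num_classes) logits from the two classifiers.
weak_logits = torch.randn(8, 5, requires_grad=True)
strong_logits = torch.randn(8, 5)
ce_loss = torch.tensor(1.2)  # stand-in for the supervised cross-entropy term
lam = 0.1                    # lambda is tuned on a development set
loss = ce_loss + lam * prediction_regularizer(weak_logits, strong_logits)
loss.backward()
```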
Experiments ::: Long to Short Transfer We show the results for long to short transfer in the first part of Table TABREF20. Results show that Cross Domain Transfer models do not perform well, which confirms our hypothesis that they are not well suited for this task. MILNet performs well on polarity tasks, but performs poorly on fine-grained tasks, having worse performance than the no-transfer CNN baseline. This shows that although Multiple Instance Learning models are effective in classifying positive or negative sentiments, they are not flexible to fine-grained sentiment intensities, which differs when text lengths are different. On the other hand, BaggedCNN performs better than MILNet, proving that simplifying MIL models work well on CLT. Overall, LeTraNets performs the best among all models, having the best accuracies and RMSEs on all datasets and settings. Experiments ::: Short to Long Transfer We report the results for short to long transfer in the second part of Table TABREF20. Results show that Cross Domain Transfer models perform much worse compared to their performance in the long to short transfer task. The weak supervised model MASM+CNN performs the worst, having worse results than the no-transfer CNN baseline on all datasets. BaggedCNN also performs well in this task, even though it does not use its attention mechanism. This shows that BaggedCNN is a very tough-to-beat baseline for the CLT task. Finally, LeTraNets also outperforms all the models on this subtask. Experiments ::: Transfer Ratio (TR) Figure FIGREF30 shows the average transfer ratio (TR) of all competing models, where $\text{TR}=1$ means trivially transferable. The figure shows that the CDT models SCL and NeuSCL both obtain a larger transfer ratio compared to the no-transfer CNN baseline. The transfer ratios improve when CNN is integrated into both models, but does not improve much from the baseline. MILNet and BaggedCNN perform comparably on the long to short transfer task, where BaggedCNN performs slightly better. LeTraNets performs the best among the models, having transfer ratios less than 1.1. Analyses ::: Ablation on Training Mechanisms We investigate the performance of LeTraNets when the training mechanisms are not used. Specifically, we perform ablation tests on the Joint Training (JT), Prediction Regularization (PR), and Stepwise Pretraining (SP) mechanisms. The results in Table TABREF31 show that LeTraNets performs the best when all training mechanisms are used. Also, when used individually, all the training mechanisms boost up the performance of the model. Hence, we confirm that the training mechanisms help LeTraNets achieve good performance on the task. Analyses ::: Performance per Text Length We check the capability of LeTraNets to transfer across text lengths, by looking at its performance as the text length increases. Specifically, we compare the performance per text length of LeTraNets and CNN models, trained on either short texts (LeTraNetsshort and CNNshort) or long texts (LeTraNetslong and CNNlong), on Res_en short/long datasets. Figure FIGREF34 shows the results. CNN performs well when the text length is similar to the training dataset and performs poorly otherwise. LeTraNets, however, performs similarly on all kinds of text lengths although it is trained purely on a dataset of a specific length. More interestingly, LeTraNetsshort performs better than LeTraNetslong on longer texts, and unexpectedly performs worse on shorter texts. 
This suggests that LeTraNets weakens its ability to classify texts with the same length and improves its ability to classify texts with different length. This property is acceptable in our problem setup since we care on effectively classifying short (or long) texts more, assuming we only have access to long (or short) texts as training data. However, future work should explore on CLT models that perform well on both text lengths. Analyses ::: On Topic Diversity Longer texts can discuss diverse topics, while shorter texts are limited to few (or one) topics. In the sentiment classification domain, longer reviews may mention positive sentiments towards an aspect of a product, and then talk about negative sentiments towards another aspect. With this hypothesis, we examine whether LeTraNets can handle longer texts with diverse topics when trained on short texts. Specifically, we compare the performance per topic diversity of LeTraNets and CNN models, trained on short texts of Res_en dataset. We measure topic diversity as the Shannon index BIBREF41 of the topic distribution inferred by an LDA topic model BIBREF42 fit using the unlabeled data. Figure FIGREF36 shows the results. Results indicate that the performance increase of LeTraNets over CNN increases as the diversity of topics increases. This shows that for short to long transfer, LeTraNets is able to handle texts with topics that are more diverse, even when trained on short texts, which tend to have less diverse topics. Analyses ::: Cross Domain and Length Transfer Which between domain and text length should we consider to achieve a better performance? To answer this question, we combine Cross Domain Transfer (CDT) and Cross Length Transfer (CLT) into one task: Cross Domain and Length Transfer (CDLT) and compare the performance of CDT and CLT models on the task. We use the Mov_en and Res_en datasets to create four CDLT datasets, and check which between the CDT model NeuSCL+CNN and the CLT model LeTraNets achieves a higher increase in performance. The results are shown in Table TABREF38. We find that NeuSCL+CNN performs worse, obtaining accuracies worse than that of the no-transfer CNN baseline. LeTraNets performs better, obtaining significant increase in performance from the baseline. This shows that solving the non-transferability of length is more important to achieve a more effective sentiment classifier. Conclusions We defined a new task called Cross Length Transfer (CLT) to check the transferability across lengths of classification models. We set the grounds by defining the task, providing three benchmark datasets from different domains and languages, and introducing models from related tasks. We proposed two models: a strong baseline model called BaggedCNN, and LeTraNets, a model that improves over the weakness of BaggedCNN. Our multiple experiments show that LeTraNets demonstrates superior performance over all competing models. We aim to apply the CLT to other classification tasks, such as natural language inference BIBREF43, where text length is influential towards overall model performance BIBREF44. Acknowledgments We would like to thank the anonymous reviewers for their helpful feedback and suggestions. Amplayo is grateful to be supported by a Google PhD Fellowship. This research was supported by MSIT (Ministry of Science and ICT), Korea, under ITRC program (IITP-2019-2016-0-00464) supervised by IITP. Hwang is the corresponding author.
movies , restaurants, English , Korean
280f863cfd63b711980ca6c7f1409c0306473de7
280f863cfd63b711980ca6c7f1409c0306473de7_0
Q: Are their corpus and software public? Text: Introduction In this paper, we study semantic role labelling (SRL), a subtask of semantic parsing of natural language sentences. SRL is the task of identifying semantic roles of arguments of each predicate in a sentence. In particular, it answers a question Who did what to whom, when, where, why?. For each predicate in a sentence, the goal is to identify all constituents that fill a semantic role, and to determine their roles, such as agent, patient, or instrument, and their adjuncts, such as locative, temporal or manner. Figure 1 shows the SRL of a simple Vietnamese sentence. In this example, the arguments of the predicate giúp (helped) are labelled with their semantic roles. The meaning of the labels will be described in detail in Section "Building a Vietnamese PropBank" . SRL has been used in many natural language processing (NLP) applications such as question answering BIBREF0 , machine translation BIBREF1 , document summarization BIBREF2 and information extraction BIBREF3 . Therefore, SRL is an important task in NLP. The first SRL system was developed by Gildea and Jurafsky BIBREF4 . This system was performed on the English FrameNet corpus. Since then, SRL task has been widely studied by the NLP community. In particular, there have been two shared-tasks, CoNLL-2004 BIBREF5 and CoNLL-2005 BIBREF6 , focusing on SRL task for English. Most of the systems participating in these shared-tasks treated this problem as a classification problem which can be solved by supervised machine learning techniques. There exists also several systems for other well-studied languages like Chinese BIBREF7 or Japanese BIBREF8 . This paper covers not only the contents of two works published in conference proceedings BIBREF9 (in Vietnamese) and BIBREF10 on the construction and the evaluation of a first SRL system for Vietnamese, but also an extended investigation of techniques used in SRL. More concretely, the use of integer linear programming inference procedure and distributed word representations in our semantic role labelling system, which leads to improved results over our previous work, as well as a more elaborate evaluation are new for this article. Our system includes two main components, a SRL corpus and a SRL software which is thoroughly evaluated. We employ the same development methodology of the English PropBank to build a SRL corpus for Vietnamese containing a large number of syntactically parsed sentences with predicate-argument structures. We then use this SRL corpus and supervised machine learning models to develop a SRL software for Vietnamese. We demonstrate that a simple application of SRL techniques developed for English or other languages could not give a good accuracy for Vietnamese. In particular, in the constituent identification step, the widely used 1-1 node-mapping algorithm for extracting argument candidates performs poorly on the Vietnamese dataset, having $F_1$ score of 35.93%. We thus introduce a new algorithm for extracting candidates, which is much more accurate, achieving an $F_1$ score of 84.08%. In the classification step, in addition to the common linguistic features, we propose novel and useful features for use in SRL, including function tags and distributed word representations. These features are employed in two statistical classification models, maximum entropy and support vector machines, which are proved to be good at many classification problems. 
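As a rough illustration of the classification step just described, the sketch below trains a maximum entropy model (logistic regression) and a linear SVM over simple hand-crafted features of candidate constituents with scikit-learn; the feature names and toy examples are invented for illustration and are not the actual feature set used in the system.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Toy candidate constituents with hypothetical features (phrase type,
# function tag, head word, position relative to the predicate).
train_feats = [
    {"phrase": "NP", "func": "SUB", "head": "Nam", "before_pred": True},
    {"phrase": "NP", "func": "DOB", "head": "em", "before_pred": False},
    {"phrase": "PP", "func": "TMP", "head": "hôm_qua", "before_pred": False},
]
train_labels = ["Arg0", "Arg1", "ArgM-TMP"]

vec = DictVectorizer()
X = vec.fit_transform(train_feats)

maxent = LogisticRegression(max_iter=1000).fit(X, train_labels)  # maximum entropy
svm = LinearSVC().fit(X, train_labels)                           # support vector machine

test = vec.transform([{"phrase": "NP", "func": "SUB", "head": "Lan", "before_pred": True}])
print(maxent.predict(test), svm.predict(test))
```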
In order to incorporate important grammatical constraints into the system to improve further the performance, we combine machine learning techniques with an inference procedure based on integer linear programming. Finally, we use distributed word representations produced by two recent unsupervised models, the Skip-gram model and the GloVe model, on a large corpus to alleviate the data sparseness problem. These word embeddings help our SRL software system generalize well on unseen words. Our final system achieves an $F_1$ score of 74.77% on a test corpus. This system, including corpus and software, is available as an open source project for free research and we believe that it is a good baseline for the development of future Vietnamese SRL systems. The remainder of this paper is structured as follows. Section "Existing English SRL Corpora" describes the construction of a SRL corpus for Vietnamese. Section "Vietnamese SRL System" presents the development of a SRL software, including the methodologies of existing systems and of our system. Section "Evaluation" presents the evaluation results and discussion. Finally, Section "Conclusion" concludes the paper and suggests some directions for future work. Vietnamese SRL Corpus Like many other problems in NLP, annotated corpora are essential for statistical learning as well as evaluation of SRL systems. In this section, we start with an introduction of existing English SRL corpora. Then we present our work on the construction of the first reference SRL corpus for Vietnamese. Existing English SRL Corpora The FrameNet project is a lexical database of English. It was built by annotating examples of how words are used in actual texts. It consists of more than 10,000 word senses, most of them with annotated examples that show the meaning and usage and more than 170,000 manually annotated sentences BIBREF11 . This is the most widely used dataset upon which SRL systems for English have been developed and tested. FrameNet is based on the Frame Semantics theory BIBREF12 . The basic idea is that the meanings of most words can be best understood on the basis of a semantic frame: a description of a type of event, relation, or entity and the participants in it. All members in semantic frames are called frame elements. For example, a sentence in FrameNet is annotated in cooking concept as shown in Figure 2 . PropBank is a corpus that is annotated with verbal propositions and their arguments BIBREF13 . PropBank tries to supply a general purpose labelling of semantic roles for a large corpus to support the training of automatic semantic role labelling systems. However, defining such a universal set of semantic roles for all types of predicates is a difficult task; therefore, only Arg0 and Arg1 semantic roles can be generalized. In addition to the core roles, PropBank defines several adjunct roles that can apply to any verb. It is called Argument Modifier. The semantic roles covered by the PropBank are the following: Core Arguments (Arg0-Arg5, ArgA): Arguments define predicate specific roles. Their semantics depend on predicates in the sentence. Adjunct Arguments (ArgM-*): General arguments that can belong to any predicate. There are 13 types of adjuncts. Reference Arguments (R-*): Arguments represent arguments realized in other parts of the sentence. Predicate (V): Participant realizing the verb of the proposition. For example, the sentence of Figure 2 can be annotated in the PropBank role schema as shown in Figure 3 . 
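To make the PropBank-style predicate-argument structure concrete, here is a minimal data structure one might use to store a single annotated proposition; the field names and the English example are illustrative and do not reflect the actual PropBank file format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Argument:
    span: Tuple[int, int]  # token offsets of the constituent
    label: str             # e.g. "Arg0", "Arg1", "ArgM-TMP", "R-Arg0"
    text: str

@dataclass
class Proposition:
    predicate_index: int   # position of the verb (the "V" element)
    predicate: str
    arguments: List[Argument]

# Illustrative proposition for a simple English sentence: "Mary bought a book yesterday."
prop = Proposition(
    predicate_index=1,
    predicate="bought",
    arguments=[
        Argument(span=(0, 1), label="Arg0", text="Mary"),
        Argument(span=(2, 4), label="Arg1", text="a book"),
        Argument(span=(4, 5), label="ArgM-TMP", text="yesterday"),
    ],
)
print([a.label for a in prop.arguments])
```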
The English PropBank methodology is currently implemented for a wide variety of languages such as Chinese, Arabic or Hindi with the aim of creating parallel PropBanks. This SRL resource has a great impact on many natural language processing tasks and applications. VerbNet is a verb lexicon of English, which was developed by Karin Kipper-Schuler and colleagues BIBREF14 . It contains more than 5800 English verbs, which are classified into 270 groups, according to the verb classification method of Beth Levin BIBREF15 . In this approach, the behavior of a verb is mostly determined by its meaning. Once classified into groups, each verb group is added semantic roles. VerbNet has 23 semantic roles, for example Actor, the participant that is the investigator of an event. Agent, the actor in an event who initiates and carries out the event and who exists independently of the event. Attribute, the undergoer that is a property of an entity or entities. Destination, the goal that is a concrete, physical location. These syntactic roles normally answer who, what, when and how questions. A SRL annotation guidelines of this project is available online . In summary, SRL corpora have been constructed for English and other well-resourced languages. They are important resources which are very useful for many natural language processing applications. For the Vietnamese language, there has not existed any SRL corpus which with a similar level like those of English corpora described above. In the following sections, we report our initiatives for constructing and evaluating a SRL corpus for Vietnamese. Building a Vietnamese PropBank In this section, we present the construction of a Vietnamese SRL corpus, which is referred as Vietnamese PropBank hereafter. We first describe annotation guidelines and then describe the SRL corpus which has been developed. The determination of semantic roles in the Vietnamese language is a difficult problem and it has been investigated with different opinions. In general, Vietnamese linguists have not reached a consensus on a list of semantic roles for the language. Different linguists proposed different lists; some used the same name but with different meaning of a role, or different names having the same meaning. Nevertheless, one can use an important principle for determining semantic roles: "Semantic role is the actual role a participant plays in some situation and it always depends on the nature of that situation" BIBREF16 . This means that when identifying the meaning of a phrase or of a sentence, one must not separate it out of the underlying situation that it appears. While there might be some controversy about the exact semantic role names should be, one can list common semantic roles which have been accepted by most of Vietnamese linguists BIBREF17 . The syntactic sub-categorization frames are closely related to the verb meanings. That is, the meaning of a sentence can be captured by the subcategorization frame of the verb predicate. In consequence, the sentence meaning can be described by labelling the semantic roles for each participant in the sub-categorization frame of the predicate. This approach is adopted by many Vietnamese linguists and different semantic roles set have been proposed. 
For example, Cao Xuân Hạo BIBREF16 makes use of the argument (obligatory participants) roles as agent, actor, processed, force, carrier, patient, experiencer, goal, etc., while Diệp Quang Ban BIBREF18 makes use of fact categories: dynamic, static, mental, existential, verbal, relational etc. For adjuncts (optional participants), Cao Xuân Hạo uses the roles: manner, mean, result, path, etc., while Diệp Quang Ban makes use of circumstance types: time, space, cause, condition, goal, result, path, etc. In this work, we took a pragmatic standpoint during the design of a semantic role tagset and focused our attention on the SRL categories that we expect to be most necessary and useful in practical applications. We have constructed a semantic role tagset based on two following principles: The semantic roles are well-defined and commonly accepted by the Vietnamese linguist community. The semantic roles are comparable to those of the English PropBank corpus, which make them helpful and advantageous for constructing multi-lingual corpora and applications in later steps. Furthermore, it seems fairly indisputable that there are structural and semantic correspondences accross languages. We have selected a SRL tagset which is basically similar to that of the PropBank. However, some roles are made more fine-grained accounting for idiosyncratic properties of the Vietnamese language. In addition, some new roles are added to better distinguish predicate arguments when the predicate is an adjective, a numeral, a noun or a preposition, which is a common phenomenon in Vietnamese besides the popular verbal predicate. The following paragraph describes some semantic roles of predicative arguments where the predicate is a verb: Arg0: The agent semantic role representing a person or thing who is the doer of an event. For example, $ [0.5px]{\text{Nam}}_{\mathtt {Arg0}} \text{ đến trường} \; (\text{Nam goes to school.}) $ Arg0-Identified and Arg1-Identifier: The semantic roles representing identified entity and identifier respectively, normally used with the copula “là”. For example, $ [0.5px]{\text{Cầu thủ giỏi nhất ở đây}}_{\mathtt {Arg0-Identified}} \text{ là } [0.5px]{\text{anh ấy}}_{\mathtt {Arg1-Identifier}} $ $ (\text{He is the best player here.})$ Arg1-Patient: The semantic role which is the surface object of a predicate indicating the person or thing affected. For example, $ \text{Bộ đội phá } [0.5px]{\text{cầu}}_{\texttt {Arg1-Patient}} \; $ $(\text{The soldiers broke a bridge.})$ Arg2: The semantic role of a beneficiary indicating a referent who is advantaged or disadvantaged by an event. For example, $ \text{Nó chữa cái xe cho } [0.5px]{\text{chị ấy}}_{\mathtt {Arg2}} \; $ $(\text{He repaired a bike for her.})$ Figure 4 presents an example of the SRL analysis of a syntactically bracketed sentence “Ba đứa con anh đã có việc làm ổn định.” (His three children have had a permanent job.). The semantic roles of this sentence include: Arg0: “ba đứa con anh” (his three children) is the agent ArgM-TMP: “đã” is a temporal modifier Rel: “có” (have) is the predicate Arg1: “việc làm ổn định” (a permanent job) is the patient. Once the SRL annotation guidelines have been designed, we built a Vietnamese SRL corpus by following two main steps. In the first step, we proposed a set of conversion rules to convert automatically a syntactically annotated treebank containing 10,000 manually annotated sentences (the VietTreeBank) to a coarse-grained SRL annotated corpus. 
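These conversion rules, detailed just below with an example, are essentially pattern matches over function tags and positions relative to the predicate. The sketch below only suggests what such a rule could look like in code; the helper names and dictionary keys are ours, and the actual rules in Table 2 are defined over the full VietTreeBank tagset:

```python
# Hypothetical shape of a rule-based tree-to-SRL converter; this is not the
# actual rule set of Table 2, only an illustration of its general form.

def coarse_roles(constituents, predicate_index):
    """Assign coarse-grained roles to constituents of one proposition."""
    roles = []
    for c in constituents:
        if c["index"] == predicate_index:
            roles.append((c, "REL"))                 # the predicate itself
        elif c["function"] == "SUB" or c["index"] < predicate_index:
            roles.append((c, "Arg0"))                # subject / pre-predicate phrase
        elif c["phrase"] == "NP" and c["index"] > predicate_index:
            roles.append((c, "Arg1"))                # NP following the predicate
        else:
            roles.append((c, None))                  # left for manual annotation
    return roles
```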
The Vietnamese treebank is one result of a national project which aims to develop basic resources and tools for Vietnamese language and speech processing. The raw texts of the treebank are collected from the social and political sections of the Youth online daily newspaper. The corpus is divided into three sets corresponding to three annotation levels: word-segmented, part-of-speech-tagged and syntax-annotated set. The syntax-annotated corpus, a subset of the part-of-speech-tagged set, is currently composed of $10,471$ sentences ( $225,085$ tokens). Sentences range from 2 to 105 words, with an average length of $21.75$ words. There are $9,314$ sentences of length 40 words or less. The tagset of the treebank has 38 syntactic labels (18 part-of-speech tags, 17 syntactic category tags, 3 empty categories) and 17 function tags. For details, please refer to BIBREF19 . The meanings of some common tags are listed in Table 1 . The coarse-grained semantic role tagset contains 24 role names which are all based on the main roles of the PropBank. We carefully investigated the tagset of the VietTreeBank, based on detailed guidelines of constituency structures, phrasal types, functional tags, clauses, parts-of-speech and the adverbial functional tagset, to propose a set of rules for determining high-level semantic roles. Some rules for coarse-grained annotation are shown in Table 2 . Each rule is used to determine a semantic role for a phrase of a sentence. As an example, consider the constituency analysis of a sentence in the VietTreeBank “Kia là những ngôi nhà vách đất.” (Over there are soil-wall houses.) (S (NP-SUB (P-H Kia)) (VP (V-H là) (NP (L những) (Nc-H ngôi) (N nhà) (NP (N-H vách) (N đất)))) (. .)) First, using the annotation rule for Arg0 (the phrase having syntactical function SUB or preceding the predicate of the sentence), we can annotate the semantic role Arg0 for the word “Kia”. The predicate “là” is annotated with the semantic role REL. Finally, the noun phrase following the predicate, “những ngôi nhà vách đất”, is annotated with Arg1. In the second step, we developed a software tool to help a team of Vietnamese linguists manually revise and annotate the converted corpus with fine-grained semantic roles. The software is web-based, friendly, and makes correction and editing easy for multiple linguists. In addition, it also permits collaborative work where any edit at the sentence level is versioned and logged with meta-information so as to facilitate cross validation and discussion between linguists if necessary. We have completed the semantic role annotation of 5,460 sentences of the VietTreeBank, covering 7,525 verbal and adjectival predicatives. The annotation guidelines as well as the current SRL corpus are published as open resources for free research. In the next section, we present our effort in developing a SRL software system for Vietnamese which is constructed and evaluated on this SRL corpus. Existing Approaches This section gives a brief survey of common approaches which are used by many existing SRL systems of well-studied languages. These systems are investigated in two aspects: (a) the data type that the systems use and (b) their approaches for labelling semantic roles, including model types, labelling strategies, degrees of granularity and post-processing. The input data of a SRL system are typically syntactically parsed sentences. There are two common syntactic representations, namely bracketed trees and dependency trees. Some systems use bracketed trees of sentences as input data.
A bracketed tree of a sentence is the tree of nested constituents representing its constituency structure. Some systems use dependency trees of a sentence, which represents dependencies between individual words of a sentence. The syntactic dependency represents the fact that the presence of a word is licensed by another word which is its governor. In a typed dependency analysis, grammatical labels are added to the dependencies to mark their grammatical relations, for example nominal subject (nsubj) or direct object (dobj). Figure 5 shows the bracketed tree and the dependency tree of an example sentence. The first step of a SRL system is to extract constituents that are more likely to be arguments or parts of arguments. This step is called argument candidate extraction. Most of SRL systems for English use 1-1 node mapping method to find candidates. This method searches all nodes in a parse tree and maps constituents and arguments. Many systems use a pruning strategy on bracketed trees to better identify argument candidates BIBREF7 . In a second step, each argument candidate is labelled with a semantic role. Every SRL system has a classification model which can be classified into two types, independent model or joint model. While an independent model decides the label of each argument candidate independently of other candidates, a joint model finds the best overall labelling for all candidates in the sentence at the same time. Independent models are fast but are prone to inconsistencies such as argument overlap, argument repetition or argument missing. For example, Figure 6 shows some examples of these inconsistencies when analyzing the Vietnamese sentence Do học chăm, Nam đã đạt thành tích cao (By studying hard, Nam got a high achievement). Strategies for labelling semantic roles are diverse, but they can be classified into three main strategies. Most of the systems use a two-step approach consisting of identification and classification BIBREF20 , BIBREF21 . The first step identifies arguments from many candidates, which is essentially a binary classification problem. The second step classifies the identified arguments into particular semantic roles. Some systems use a single classification step by adding a “null” label into semantic roles, denoting that this is not an argument BIBREF22 . Other systems consider SRL as a sequence tagging problem BIBREF23 , BIBREF24 . Existing SRL systems use different degrees of granularity when considering constituents. Some systems use individual words as their input and perform sequence tagging to identify arguments. This method is called word-by-word (W-by-W) approach. Other systems use syntactic phrases as input constituents. This method is called constituent-by-constituent (C-by-C) approach. Compared to the W-by-W approach, C-by-C approach has two main advantages. First, phrase boundaries are usually consistent with argument boundaries. Second, C-by-C approach allows us to work with larger contexts due to a smaller number of candidates in comparison to the W-by-W approach. Figure 7 presents an example of C-by-C and W-by-W approaches. To improve the final result, some systems use post-processing to correct argument labels. Common post-processing methods include re-ranking, Viterbi search and integer linear programming (ILP). Our Approach The previous subsection has reviewed existing techniques for SRL which have been published so far for well-studied languages. 
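To make the two-step identification/classification strategy reviewed above concrete, here is a small sketch chaining a binary argument identifier with a multi-class role classifier. scikit-learn and the toy feature dictionaries are our illustration choices, not part of the systems cited above:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy feature dictionaries for argument candidates (placeholders).
X = [{"phrase_type": "NP", "position": 0, "path": "NP↑S↓VP↓V"},
     {"phrase_type": "NP", "position": 1, "path": "NP↑VP↓V"},
     {"phrase_type": "R",  "position": 1, "path": "R↑VP↓V"}]
y_is_arg = [1, 1, 0]            # step 1: argument vs. non-argument
roles = ["Arg0", "Arg1"]        # step 2: roles of the identified arguments

# Step 1: binary identification.
identifier = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
identifier.fit(X, y_is_arg)

# Step 2: role classification, trained only on candidates kept by step 1.
X_args = [x for x, keep in zip(X, y_is_arg) if keep]
classifier = make_pipeline(DictVectorizer(), LinearSVC())
classifier.fit(X_args, roles)

def label(candidates):
    """Label a list of candidate feature dictionaries with semantic roles."""
    kept = [c for c, k in zip(candidates, identifier.predict(candidates)) if k]
    return list(zip(kept, classifier.predict(kept))) if kept else []
```

A one-step variant simply adds a "null" class to the role classifier and drops the identifier, which is the alternative strategy mentioned above.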
In this section, we first show that these techniques per se cannot give a good result for Vietnamese SRL, due to some inherent difficulties, both in terms of language characteristics and of the available corpus. We then develop a new algorithm for extracting candidate constituents for use in the identification step. Some difficulties of Vietnamese SRL are related to its SRL corpus. As presented in the previous section, this SRL corpus has 5,460 annotated sentences, which is much smaller than SRL corpora of other languages. For example, the English PropBank contains about 50,000 sentences, which is about ten times larger. While smaller in size, the Vietnamese PropBank has more semantic roles than the English PropBank has – 28 roles compared to 21 roles. This makes the unavoidable data sparseness problem more severe for Vietnamese SRL than for English SRL. In addition, our extensive inspection and experiments on the Vietnamese PropBank have uncovered that this corpus has many annotation errors, largely due to encoding problems and inconsistencies in annotation. In many cases, we have to fix these annotation errors by ourselves. In other cases where only a proposition of a complex sentence is incorrectly annotated, we perform an automatic preprocessing procedure to drop it out, leave the correctly annotated propositions untouched. We finally come up with a corpus of 4,800 sentences which are semantic role annotated. A major difficulty of Vietnamese SRL is due to the nature of the language, where its linguistic characteristics are different from occidental languages BIBREF25 . We first try to apply the common node-mapping algorithm which is widely used in English SRL systems to the Vietnamese corpus. However, this application gives us a very poor performance. Therefore, in the identification step, we develop a new algorithm for extracting candidate constituents which is much more accurate for Vietnamese than the node-mapping algorithm. Details of experimental results will be provided in the Section "Evaluation" . In order to improve the accuracy of the classification step, and hence of our SRL system as a whole, we have integrated many useful features for use in two statistical classification models, namely Maximum Entropy (ME) and Support Vector Machines (SVM). On the one hand, we adapt the features which have been proved to be good for SRL of English. On the other hand, we propose some novel features, including function tags, predicate type and distance. Moreover, to improve further the performance of our system, we introduce some appropriate constraints and apply a post-processing method by using ILP. Finally, to better handle unseen words, we generalize the system by integrating distributed word representations. In the next paragraphs, we first present our constituent extraction algorithm to get inputs for the identification step and then the ILP post-processing method. Details of the features used in the classification step and the effect of distributed word representations in SRL will be presented in Section "Evaluation" . Our algorithm derives from the pruning algorithm for English BIBREF26 with some modifications. While the original algorithm collects sisters of the current node, our algorithm checks the condition whether or not children of each sister have the same phrase label and have different function label from their parent. If they have the same phrase labels and different function labels from their parent, our algorithm collects each of them as an argument candidate. 
Otherwise, their parent is collected as a candidate. In addition, we remove the constraint that does not collect coordinated nodes from the original algorithm. This algorithm aims to extract constituents from a bracketed tree which are associated with their corresponding predicates of the sentence. If the sentence has multiple predicates, multiple constituent sets corresponding to the predicates are extracted. The pseudo-code of the algorithm is described in Algorithm UID53 .

Algorithm UID53 (Constituent Extraction Algorithm)
Input: a bracketed tree T and its predicate
Output: a tree with constituents for the predicate

    currentNode ← predicateNode
    while currentNode ≠ T.root() do
        for all S ∈ currentNode.sibling() do
            if |S.children()| > 1 and S.children().get(0).isPhrase() then
                sameType ← true
                diffTag ← true
                phraseType ← S.children().get(0).phraseType()
                funcTag ← S.children().get(0).functionTag()
                for i ← 1 to |S.children()| - 1 do
                    if S.children().get(i).phraseType() ≠ phraseType then
                        sameType ← false; break
                    if S.children().get(i).functionTag() = funcTag then
                        diffTag ← false; break
                if sameType and diffTag then
                    for all child ∈ S.children() do
                        T.collect(child)
                else
                    T.collect(S)
            else
                T.collect(S)
        currentNode ← currentNode.parent()
    return T

This algorithm uses several simple functions. The $root()$ function gets the root of a tree. The $children()$ function gets the children of a node. The $sibling()$ function gets the sisters of a node. The $isPhrase()$ function checks whether a node is of phrasal type or not. The $phraseType()$ function and $functionTag()$ function extract the phrase type and function tag of a node, respectively. Finally, the $collect(node)$ function collects words from leaves of the subtree rooted at a node and creates a constituent. Figure 8 shows an example of running the algorithm on the sentence Bà nói nó là con trai tôi mà (You said that he is my son). First, we find the current predicate node V-H là (is). The current node has only one sibling, an NP node. This NP node has three children where some of them have different labels from their parents, so this node and its associated words are collected. After that, we set the current node to its parent and repeat the process until reaching the root of the tree. Finally, we obtain a tree with the following constituents for predicate là: Bà, nói, nó, and con trai tôi mà. Because the system classifies arguments independently, labels assigned to arguments in a sentence may violate Vietnamese grammatical constraints. To prevent such violations and improve the result, we propose a post-processing step which finds the best global assignment that also satisfies grammatical constraints. Our work is based on the ILP method of the English PropBank BIBREF27 . Some constraints that are unique to Vietnamese are also introduced and incorporated. Integer programs are almost identical to linear programs. The cost function and the constraints are all in linear form. The only difference is that the variables in ILP can only take integer values. A general binary ILP can be stated as follows. Given a cost vector $\vec{p} \in \mathbb {R}^{d}$ , a set of variables $\vec{z} = (z_1,\dots , z_d) \in \mathbb {R}^d$ , and cost matrices $\mathbf {C}_{1} \in \mathbb {R}^{t_{1}}\times \mathbb {R}^{d}$ , $\mathbf {C}_{2} \in \mathbb {R}^{t_{2}}\times \mathbb {R}^{d}$ , where $t_1, t_2$ are the number of inequality and equality constraints and $d$ is the number of binary variables.
The ILP solution $\hat{\vec{z}}$ is the vector that maximizes the cost function: $$\hat{\vec{z}} = \underset{\vec{z} \in 0,1^{d}}{\operatorname{argmax}}\; \vec{p} \cdot \vec{z} \quad \text{ subject to } {\left\lbrace \begin{array}{ll} \mathbf {C}_{1}\vec{z} \ge \vec{b}_{1}\\ \mathbf {C}_{2}\vec{z} = \vec{b}_{2} \end{array}\right.}$$ (Eq. 60) where $\vec{b}_1, \vec{b}_2 \in \mathbb {R}^d$ . Our system attempts to find exact roles for argument candidate set for each sentence. This set is denoted as $S^{1:M}$ , where the index ranged from 1 to $M$ ; and the argument role set is denoted as $\mathcal {P}$ . Assuming that the classifier returns a score, $score(S^{i}=c^{i})$ , corresponding to the likelihood of assigning label $c^{i}$ to argument $S^{i}$ . The aim of the system is to find the maximal overall score of the arguments: $$\hat{c}^{1:M} = \underset{c^{1:M} \in \mathcal {P}^M}{\operatorname{argmax}}\; score(S^{1:M} = c^{1:M})$$ (Eq. 61) $$= \underset{c^{1:M} \in \mathcal {P}^M}{\operatorname{argmax}}\; \sum ^{M}_{i=1} score(S^{i}=c^{i})$$ (Eq. 62) In this paragraph, we propose a constraint set for our SRL system. Some of them are directly inspired and derived from results for English SRL, others are constraints that we specify uniquely to account for Vietnamese specificities. The constraint set includes: One argument can take only one type. Arguments cannot overlap with the predicate in the sentence. Arguments cannot overlap other arguments in the sentence. There is no duplicating argument phenomenon for core arguments in the sentence. If the predicate is not verb type, there are only 2 types of core argument Arg0 and Arg1. In particular, constraints from 1 to 4 are derived from the ILP method for English BIBREF27 , while constraint 5 is designed specifically for Vietnamese. To find the best overall labelling satisfying these constraints, we transform our system to an ILP problem. First, let $z_{ic} = [S^{i} = c]$ be the binary variable that shows whether or not $S^{i}$ is labelled argument type $c$ . We denote $p_{ic} = score(S^{i}=c)$ . The objective function of the optimization problem can be written as: $$\underset{z \in {0,1}}{\operatorname{argmax}}\; \sum _{i=1}^{M}\sum _{c=1}^{|\mathcal {P}|}p_{ic}z_{ic}.$$ (Eq. 70) Next, each constraint proposed above can be reformulated as follows: One argument can take only one type. $$\sum _{c=1}^{|\mathcal {P}|}z_{ic}=1, \quad \forall i \in [1,M].$$ (Eq. 72) Arguments cannot overlap with the predicate in the sentence. Arguments cannot overlap other arguments in the sentence. If there are $k$ arguments $S^{1},S^{2},...,S^{k}$ that appear in a same word in the sentence, we can conclude that there are at least $k-1$ arguments that are classified as “null”: $$\sum _{i=1}^{k}z_{ic} \ge k-1 \quad (c = \text{``null''}).$$ (Eq. 75) This constraint has been satisfied by our constituent extraction approach. Thus, we do not need to add this constraint in the post-processing step if the constituent extraction algorithm has been used. There is no duplicating argument phenomenon for core arguments in the sentence. $$\begin{split} \sum _{i=1}^{M}z_{ic} \le 1, \\ \forall c \in \left\lbrace \text{Arg0}, \text{Arg1}, \text{Arg2}, \text{Arg3}, \text{Arg4} \right\rbrace . \end{split}$$ (Eq. 77) If the predicate is not verb type, there are only 2 types of core argument Arg0 and Arg1. $$\sum _{i=1}^{M}z_{ic}=0 \quad \forall c \in \left\lbrace \text{Arg2}, \text{Arg3}, \text{Arg4} \right\rbrace .$$ (Eq. 
79) In the next section, we present experimental results, system evaluation and discussions. Evaluation In this section, we describe the evaluation of our SRL system. First, we first introduce two feature sets used in machine learning classifiers. Then, the evaluation results are presented and discussed. Next, we report the improved results by using integer linear programming inference method. Finally, we present the efficacy of distributed word representations in generalizing the system to unseen words. Feature Sets We use two feature sets in this study. The first one is composed of basic features which are commonly used in SRL system for English. This feature set is used in the SRL system of Gildea and Jurafsky BIBREF4 on the FrameNet corpus. This feature set consists of 6 feature templates, as follows: Phrase type: This is very useful feature in classifying semantic roles because different roles tend to have different syntactic categories. For example, in the sentence in Figure 8 Bà nói nó là con trai tôi mà, the phrase type of constituent nó is NP. Parse tree path: This feature captures the syntactic relation between a constituent and a predicate in a bracketed tree. This is the shortest path from a constituent node to a predicate node in the tree. We use either symbol $\uparrow $ or symbol $\downarrow $ to indicate the upward direction or the downward direction, respectively. For example, the parse tree path from constituent nó to the predicate là is NP $\uparrow $ S $\downarrow $ VP $\downarrow $ V. Position: Position is a binary feature that describes whether the constituent occurs after or before the predicate. It takes value 0 if the constituent appears before the predicate in the sentence or value 1 otherwise. For example, the position of constituent nó in Figure 8 is 0 since it appears before predicate là. Voice: Sometimes, the differentiation between active and passive voice is useful. For example, in an active sentence, the subject is usually an Arg0 while in a passive sentence, it is often an Arg1. Voice feature is also binary feature, taking value 1 for active voice or 0 for passive voice. The sentence in Figure 8 is of active voice, thus its voice feature value is 1. Head word: This is the first word of a phrase. For example, the head word for the phrase con trai tôi mà is con trai. Subcategorization: Subcategorization feature captures the tree that has the concerned predicate as its child. For example, in Figure 8 , the subcategorization of the predicate là is VP(V, NP). Preliminary investigations on the basic feature set give us a rather poor result. Therefore, we propose some novel features so as to improve the accuracy of the system. These features are as follows: Function tag: Function tag is a useful information, especially for classifying adjunct arguments. It determines a constituent's role, for example, the function tag of constituent nó is SUB, indicating that this has a subjective role. Distance: This feature records the length of the full parse tree path before pruning. For example, the distance from constituent nó to the predicate là is 3. Predicate type: Unlike in English, the type of predicates in Vietnamese is much more complicated. It is not only a verb, but is also a noun, an adjective, or a preposition. Therefore, we propose a new feature which captures predicate types. For example, the predicate type of the concerned predicate is V. Results and Discussions We use a 10-fold cross-validation method to evaluate our system. 
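A minimal sketch of this evaluation loop (the metric definitions follow right after), using scikit-learn's KFold as an assumed utility; the training and prediction functions are placeholders for the actual SRL code:

```python
import numpy as np
from sklearn.model_selection import KFold

def evaluate(gold, predicted):
    """Precision, recall and F1 over labelled arguments, represented as sets
    of (sentence_id, span, role) triples."""
    correct = len(gold & predicted)
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def cross_validate(sentences, train_fn, predict_fn, n_splits=10):
    """Average (P, R, F1) over a 10-fold split; train_fn and predict_fn
    stand in for the actual training and labelling procedures."""
    scores = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(sentences):
        model = train_fn([sentences[i] for i in train_idx])
        gold, pred = set(), set()
        for i in test_idx:
            gold |= sentences[i]["gold_args"]
            pred |= predict_fn(model, sentences[i])
        scores.append(evaluate(gold, pred))
    return tuple(np.mean(scores, axis=0))
```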
The final accuracy scores is the average scores of the 10 runs. The evaluation metrics are the precision, recall and $F_1$ -measure. The precision ( $P$ ) is the proportion of labelled arguments identified by the system which are correct; the recall ( $R$ ) is the proportion of labelled arguments in the gold results which are correctly identified by the system; and the $F_1$ -measure is the harmonic mean of $P$ and $R$ , that is $F_{1} = 2PR/(P+R)$ . In the first experiment, we compare our constituent extraction algorithm to the 1-1 node mapping and the pruning algorithm BIBREF27 . Table 3 shows the performance of two extraction algorithms. We see that our extraction algorithm outperforms significantly the 1-1 node mapping algorithm, in both of the precision and the recall ratios. It is also better than the pruning algorithm. In particular, the precision of the 1-1 node mapping algorithm is only 29.58%; it means that this method captures many candidates which are not arguments. In contrast, our algorithm is able to identify a large number of correct argument candidates, particularly with the recall ratio of 86.12% compared to 79.39% of the pruning algorithm. This result also shows that we cannot take for granted that a good algorithm for English could also work well for another language of different characteristics. In the second experiment, we continue to compare the performance of the two extraction algorithms, this time at the final classification step and get the baseline for Vietnamese SRL. The classifier we use in this experiment is a Support Vector Machine (SVM) classifier. Table 4 shows the accuracy of the baseline system. Once again, this result confirms that our algorithm achieves the better result. The $F_1$ of our baseline SRL system is 69.96%, compared to 40.66% of the 1-1 node mapping and 67.78% of the pruning system. This result can be explained by the fact that the 1-1 node mapping and the pruning algorithm have a low recall ratio, because it identifies incorrectly many argument candidates. In the third experiment, we compare two labelling strategies for Vietnamese SRL. In addition to the SVM classifier, we also try the Maximum Entropy (ME) classifier, which usually gives good accuracy in a wide variety of classification problems. Table 5 shows the $F_1$ scores of different labelling strategies. We see that the performance of SVM classifier is slightly better than the performance of ME classifier. The best accuracy is obtained by using 1-step strategy with SVM classifier. The current SRL system achieves an $F_1$ score of 69.96%. In the fourth experiment, we analyse and evaluate the impact of each individual feature to the accuracy of our system so as to find the best feature set for our Vietnamese SRL system. We start with the basic feature set presented previously, denoted by $\Phi _0$ and augment it with modified and new features as shown in Table 6 . The accuracy of these feature sets are shown in Table 7 . We notice that amongst the three features, function tag is the most important feature which increases the accuracy of the baseline feature set by about 4% of $F_1$ score. The distance feature also helps increase slightly the accuracy. We thus consider the fourth feature set $\Phi _4$ defined as $ \Phi _4 = \Phi _0 \cup \lbrace \text{Function Tag}\rbrace \cup \lbrace \text{Distance}\rbrace . $ In the fifth experiment, we investigate the significance of individual features to the system by removing them, one by one from the feature set $\Phi _4$ . 
By doing this, we can evaluate the importance of each feature to our overall system. The feature sets and their corresponding accuracy are presented in Table 8 and Table 9 respectively. We see that the accuracy increases slightly when the subcategorization feature ( $\Phi _{11}$ ) is removed. For this reason, we remove only the subcategorization feature. The best feature set includes the following features: predicate, phrase type, function tag, parse tree path, distance, voice, position and head word. The accuracy of our system with this feature set is 74.37% of $F_1$ score. As discussed previously, after classifying the arguments, we use ILP method to help improve the overall accuracy. In the sixth experiment, we set up an ILP to find the best performance satisfying constraints presented earlier. The score $p_{ic}=score(S^{i}=c)$ is the signed distance of that argument to the hyperplane. We also compare our ILP system with the ILP method for English by using only constraints from 1 to 4. The improvement given by ILP is shown in Table 10 . We see that ILP increases the performance of about 0.4% and when adding constraint 5, the result is slightly better. The accuracy of for each argument is shown in Table 11 . A detailed investigation of our constituent extraction algorithm reveals that it can account for about 86% of possible argument candidates. Although this coverage ratio is relatively high, it is not exhaustive. One natural question to ask is whether an exhaustive search of argument candidates could improve the accuracy of the system or not. Thus, in the seventh experiment, we replace our constituent extraction algorithm by an exhaustive search where all nodes of a syntactic tree are taken as possible argument candidates. Then, we add the third constraint to the ILP post-processing step as presented above (Arguments cannot overlap other arguments in the sentence). An accuracy comparison of two constituent extraction algorithms is shown in Table 12 . Taking all nodes of a syntactic tree help increase the number of candidate argument to a coverage ratio of 93.25%. However, it also proposes many wrong candidates as shown by a low precision ratio. Table 13 shows the accuracy of our system in the two candidate extraction approaches. We see that an exhaustive search of candidates help present more possible constituent candidates but it makes the performance of the system worse than the constituent extraction algorithm (69.39% compared to 74.73% of $F_1$ ratio). One plausible explanation is that the more a classifier has candidates to consider, the more it is likely to make wrong classification decision, which results in worse accuracy of the overall system. In addition, a large number of candidates makes the system lower to run. In our experiment, we see the training time increased fourfold when the exhaustive search approach was used instead of our constituent extraction algorithm. In the ninth experiment, we investigate the dependence of accuracy to the size of the training dataset. Figure 9 depicts the learning curve of our system when the data size is varied. It seems that the accuracy of our system improves only slightly starting from the dataset of about 2,000 sentences. Nevertheless, the curve has not converged, indicating that the system could achieve a better accuracy when a larger dataset is available. Generalizing to Unseen Words In this section, we report our effort to extend the applicability of our SRL system to new text domain where rare or unknown words are common. 
As seen in the previous systems, some important features of our SRL system are word features including predicates and head words. As in most NLP tasks, the words are usually encoded as symbolic identifiers which are drawn from a vocabulary. Therefore, they are often represented by one-hot vectors (also called indicator vectors) of the same length as the size of the vocabulary. This representation suffers from two major problems. The first problem is data sparseness, that is, the parameters corresponding to rare or unknown words are poorly estimated. The second problem is that it is not able to capture the semantic similarity between closely related words. This limitation of the one-hot word representation has motivated unsupervised methods for inducing word representations over large, unlabelled corpora. Recently, distributed representations of words have been shown to be advantageous for many natural language processing tasks. A distributed representation is dense, low dimensional and real-valued. Distributed word representations are called word embeddings. Each dimension of the embedding represents a latent feature of the word which hopefully captures useful syntactic and semantic similarities BIBREF28 . Word embeddings are typically induced using neural language models, which use neural networks as the underlying predictive model. Historically, training and testing of neural language models has been slow, scaling as the size of the vocabulary for each model computation BIBREF29 . However, many approaches have been recently proposed to speed up the training process, allowing scaling to very large corpora BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . Another method to produce word embeddings has been introduced recently by the natural language processing group at the Stanford university BIBREF34 . They proposed a global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. We present in the subsections UID115 and UID121 how we use a neural language model and a global log-bilinear regression model, respectively, to produce word embeddings for Vietnamese which are used in this study. We use word embeddings produced by Mikolov's continuous Skip-gram model using the neural network and source code introduced in BIBREF35 . The continuous skip-gram model itself is described in details in BIBREF33 . For our experiments we used a continuous skip-gram window of size 2, i.e. the actual context size for each training sample is a random number up to 2. The neural network uses the central word in the context to predict the other words, by maximizing the average conditional log probability $$\frac{1}{T} \sum \limits _{t=1}^T \sum \limits _{j=-c}^c \log p(w_{t+j}|w_t),$$ (Eq. 116) where $\lbrace w_i: i \in T\rbrace $ is the whole training set, $w_t$ is the central word and the $w_{t+j}$ are on either side of the context. The conditional probabilities are defined by the softmax function $$p(a|b) = \frac{\exp (o_a^\top i_b)}{\sum \limits _{w \in \mathcal {V}} \exp (o_w^\top i_b)},$$ (Eq. 117) where $i_w$ and $o_w$ are the input and output vector of $w$ respectively, and $\mathcal {V}$ is the vocabulary. For computational efficiency, Mikolov's training code approximates the softmax function by the hierarchical softmax, as defined in BIBREF30 . Here the hierarchical softmax is built on a binary Huffman tree with one word at each leaf node. 
The conditional probabilities are calculated according to the decomposition: $$p(a|b) = \prod \limits _{i=1}^l p(d_i(a)|d_1(a)... d_{i-1}(a), b),$$ (Eq. 118) where $l$ is the path length from the root to the node $a$ , and $d_i(a)$ is the decision at step $i$ on the path (for example 0 if the next node the left child of the current node, and 1 if it is the right child). If the tree is balanced, the hierarchical softmax only needs to compute around $\log _2 |\mathcal {V}|$ nodes in the tree, while the true softmax requires computing over all $|\mathcal {V}|$ words. The training code was obtained from the tool word2vec and we used frequent word subsampling as well as a word appearance threshold of 5. The output dimension is set to 50, i.e. each word is mapped to a unit vector in $\mathbb {R}^{50}$ . This is deemed adequate for our purpose without overfitting the training data. Figure 10 shows the scatter plot of some Vietnamese words which are projected onto the first two principal components after performing the principal component analysis of all the word distributed representations. We can see that semantically related words are grouped closely together. Pennington, Socher, and Manning BIBREF34 introduced the global vector model for learning word representations (GloVe). Similar to the Skip-gram model, GloVe is a local context window method but it has the advantages of the global matrix factorization method. The main idea of GloVe is to use word-word occurrence counts to estimate the co-occurrence probabilities rather than the probabilities by themselves. Let $P_{ij}$ denote the probability that word $j$ appear in the context of word $i$ ; $w_i \in \mathbb {R}^d$ and $w_j \in \mathbb {R}^d$ denote the word vectors of word $i$ and word $j$ respectively. It is shown that $$w_i^{\top } w_j = \log (P_{ij}) = \log (C_{ij}) - \log (C_i),$$ (Eq. 122) where $C_{ij}$ is the number of times word $j$ occurs in the context of word $i$ . It turns out that GloVe is a global log-bilinear regression model. Finding word vectors is equivalent to solving a weighted least-squares regression model with the cost function: $$J = \sum _{i,j = 1}^n f(C_{ij})(w_i^{\top } w_j + b_i + b_j - \log (C_{ij}))^2,$$ (Eq. 123) where $n$ is the size of the vocabulary, $b_i$ and $b_j$ are additional bias terms and $f(C_{ij})$ is a weighting function. A class of weighting functions which are found to work well can be parameterized as $$f(x) ={\left\lbrace \begin{array}{ll} \left(\frac{x}{x_{\max }}\right)^\alpha \text{if } x x_{\max } \\ 1 \text{otherwise} \end{array}\right.}$$ (Eq. 124) The training code was obtained from the tool GloVe and we used a word appearance threshold of 2,000. Figure 11 shows the scatter plot of the same words in Figure 10 , but this time their word vectors are produced by the GloVe model. To create distributed word representations, we use a dataset consisting of 7.3GB of text from 2 million articles collected through a Vietnamese news portal. The text is first normalized to lower case and all special characters are removed except these common symbols: the comma, the semicolon, the colon, the full stop and the percentage sign. All numeral sequences are replaced with the special token number, so that correlations between certain words and numbers are correctly recognized by the neural network or the log-bilinear regression model. Each word in the Vietnamese language may consist of more than one syllables with spaces in between, which could be regarded as multiple words by the unsupervised models. 
Hence it is necessary to replace the spaces within each word with underscores to create full word tokens. The tokenization process follows the method described in BIBREF36 . After removal of special characters and tokenization, the articles add up to 969 million word tokens, spanning a vocabulary of $1.5$ million unique tokens. We train the unsupervised models with the full vocabulary to obtain the representation vectors, and then prune the collection of word vectors to the $65,000$ most frequent words, excluding special symbols and the token number representing numeral sequences. We train the two word embedding models on the same text corpus presented in the previous subsections to produce distributed word representations, where each word is represented by a real-valued vector of 50 dimensions. In the last experiment, we replace predicate or head word features in our SRL system by their corresponding word vectors. For predicates which are composed of multiple words, we first tokenize them into individual words and then average their vectors to get vector representations. Table 14 and Table 15 shows performances of the Skip-gram and GloVe models for predicate feature and for head word feature, respectively. We see that both of the two types of word embeddings do not decrease the accuracy of the system. In other words, their use can help generalize the system to unseen words. Conclusion We have presented our work on developing a semantic role labelling system for the Vietnamese language. The system comprises two main component, a corpus and a software. Our system achieves a good accuracy of about 74.8% of $F_1$ score. We have argued that one cannot assume a good applicability of existing methods and tools developed for English and other occidental languages and that they may not offer a cross-language validity. For an isolating language such as Vietnamese, techniques developed for inflectional languages cannot be applied “as is”. In particular, we have developed an algorithm for extracting argument candidates which has a better accuracy than the 1-1 node mapping algorithm. We have proposed some novel features which are proved to be useful for Vietnamese semantic role labelling, notably and function tags and distributed word representations. We have employed integer linear programming, a recent inference technique capable of incorporate a wide variety of linguistic constraints to improve the performance of the system. We have also demonstrated the efficacy of distributed word representations produced by two unsupervised learning models in dealing with unknown words. In the future, we plan to improve further our system, in the one hand, by enlarging our corpus so as to provide more data for the system. On the other hand, we would like to investigate different models used in SRL, for example joint models BIBREF37 , where arguments and semantic roles are jointly embedded in a shared vector space for a given predicate. In addition, we would like to explore the possibility of integrating dynamic constraints in the integer linear programming procedure. We expect the overall performance of our SRL system to improve. Our system, including software and corpus, is available as an open source project for free research purpose and we believe that it is a good baseline for the development and comparison of future Vietnamese SRL systems. 
We plan to integrate this tool to Vitk, an open-source toolkit for processing Vietnamese text, which contains fundamental processing tools and are readily scalable for processing very large text data. Acknowledgement We would like to thank Vietnamese linguists at Vietnam Centre of Lexicography for their collaboration in developing the Vietnamese PropBank. We would also like to thank the FPT Technology Research Institute for its partial financial aid. The first author is partly funded by the Vietnam National University, Hanoi (VNU) under project number QG.15.04. We are grateful to our anonymous reviewers for their helpful comments.
Yes
b5a2b03cfc5a64ad4542773d38372fffc6d3eac7
b5a2b03cfc5a64ad4542773d38372fffc6d3eac7_0
Q: How are EAC evaluated? Text: Introduction Conversational agents or dialogue systems development are gaining more attention from both industry and academia BIBREF0 , BIBREF1 in the latest years. Some works tried to model them into domain-specific tasks such as customer service BIBREF2 , BIBREF3 , and shopping assistance BIBREF4 . Other works design a multi-purpose agents such as SIRI, Amazon Alexa, and Google Assistance. This domain is a well-researched area in Human-Computer Interaction research community but still become a hot topic now. The main development focus right now is to have an intelligent and humanizing machine to have a better engagement when communicating with human BIBREF5 . Having a better engagement will lead to higher user satisfaction, which becomes the main objective from the industry perspective. In this study, we will only focus on textual conversational agent or chatbot, a conversational artificial intelligence which can conduct a textual communication with a human by exploiting several natural language processing techniques. There are several approaches used to build a chatbot, start by using a simple rule-based approach BIBREF6 , BIBREF7 until more sophisticated one by using neural-based technique BIBREF8 , BIBREF9 . Nowadays, chatbots are mostly used as customer service such as booking systems BIBREF10 , BIBREF11 , shopping assistance BIBREF3 or just as conversational partner such as Endurance and Insomnobot . Therefore, there is a significant urgency to humanize chatbot for having a better user-engagement. Some works were already proposed several approaches to improve chatbot's user-engagement, such as building a context-aware chatbot BIBREF12 and injecting personality into the machine BIBREF13 . Other works also try to incorporate affective computing to build emotionally-aware chatbots BIBREF2 , BIBREF14 , BIBREF15 . Some existing studies shows that adding emotion information into dialogue systems is able to improve user-satisfaction BIBREF16 , BIBREF17 . Emotion information contribute to a more positive interaction between machine and human, which lead to reduce miscommunication BIBREF18 . Some previous studies also found that using affect information can help chatbot to understand users' emotional state, in order to generate better response BIBREF19 . Not only emotion, another study also introduce the use of tones to improve satisfactory service. For instance, using empathetic tone is able to reduces user stress and results in more engagement. BIBREF2 found that tones is an important aspect in building customer care chatbot. They discover eight different tones including anxious, frustrated, impolite, passionate, polite, sad, satisfied, and empathetic. In this paper, we will try to summarize some previous studies which focus on injecting emotion information into chatbots, on discovering recent issues and barriers in building engaging emotionally-aware chatbots. Therefore, we propose some research questions to have a better problem definition: This paper will be organized as follows: Section 2 introduces the history of the relation between affective information with chatbots. Section 3 outline some works which try to inject affective information into chatbots. Section 4 summarizes some affective resources which can be utilized to provide affective information. Then, Section 5 describes some evaluation metric that already applied in some previous works related to emotionally-aware chatbots. 
Last Section 6 will conclude the rest of the paper and provide a prediction of future development in this research direction based on our analysis. History of Emotionally-Aware Chatbot The early development of chatbot was inspired by Turing test in 1950 BIBREF20 . Eliza was the first publicly known chatbot, built by using simple hand-crafted script BIBREF21 . Parry BIBREF22 was another chatbot which successfully passed the Turing test. Similar to Eliza, Parry still uses a rule-based approach but with a better understanding, including the mental model that can stimulate emotion. Therefore, Parry is the first chatbot which involving emotion in its development. Also, worth to be mentioned is ALICE (Artificial Linguistic Internet Computer Entity), a customizable chatbot by using Artificial Intelligence Markup Language (AIML). Therefore, ALICE still also use a rule-based approach by executing a pattern-matcher recursively to obtain the response. Then in May 2014, Microsoft introduced XiaoIce BIBREF23 , an empathetic social chatbot which is able to recognize users' emotional needs. XiaoIce can provide an engaging interpersonal communication by giving encouragement or other affective messages, so that can hold human attention during communication. Nowadays, most of chatbots technologies were built by using neural-based approach. Emotional Chatting Machine (ECM) BIBREF15 was the first works which exploiting deep learning approach in building a large-scale emotionally-aware conversational bot. Then several studies were proposed to deal with this research area by introducing emotion embedding representation BIBREF24 , BIBREF25 , BIBREF26 or modeling as reinforcement learning problem BIBREF27 , BIBREF28 . Most of these studies used encoder-decoder architecture, specifically sequence to sequence (seq2seq) learning. Some works also tried to introduce a new dataset in order to have a better gold standard and improve system performance. BIBREF14 introduce EMPATHETICDIALOGUES dataset, a novel dataset containing 25k conversations include emotional contexts information to facilitate training and evaluating the textual conversational system. Then, work from BIBREF2 produce a dataset containing 1.5 million Twitter conversation, gathered by using Twitter API from customer care account of 62 brands across several industries. This dataset was used to build tone-aware customer care chatbot. Finally, BIBREF29 tried to enhance SEMAINE corpus BIBREF30 by using crowdsourcing scenario to obtain a human judgement for deciding which response that elicits positive emotion. Their dataset was used to develop a chatbot which captures human emotional states and elicits positive emotion during the conversation. Building Emotionally-Aware Chatbot (EAC) As we mentioned before that emotion is an essential aspect of building humanize chatbot. The rise of the emotionally-aware chatbot is started by Parry BIBREF22 in early 1975. Now, most of EAC development exploits neural-based model. In this section, we will try to review previous works which focus on EAC development. Table TABREF10 summarizes this information includes the objective and exploited approach of each work. In early development, EAC is designed by using a rule-based approach. However, in recent years mostly EAC exploit neural-based approach. Studies in EAC development become a hot topic start from 2017, noted by the first shared task in Emotion Generation Challenge on NLPCC 2017 BIBREF31 . 
Based on Table TABREF10 this research line continues to gain massive attention from scholars in the latest years. Based on Table TABREF10 , we can see that most of all recent EAC was built by using encoder-decoder architecture with sequence-to-sequence learning. These seq2seq learning models maximize the likelihood of response and are prepared to incorporate rich data to generate an appropriate answer. Basic seq2seq architecture structured of two recurrent neural networks (RNNs), one as an encoder processing the input and one as a decoder generating the response. long short term memory (LSTM) or gated recurrent unit (GRU) was the most dominant variant of RNNs which used to learn the conversational dataset in these models. Some studies also tried to model this task as a reinforcement learning task, in order to get more generic responses and let the chatbot able to achieve successful long-term conversation. Attention mechanism was also introduced in this report. This mechanism will allow the decoder to focus only on some important parts in the input at every decoding step. Another vital part of building EAC is emotion classifier to detect emotion contained in the text to produce a more meaningful response. Emotion detection is a well-established task in natural language processing research area. This task was promoted in two latest series of SemEval-2018 (Task 1) and SemEval-2019 (Task 3). Some tasks were focusing on classifying utterance into several categories of emotion BIBREF32 . However, there is also a task which trying to predict the emotion intensities contained in the text BIBREF33 . In the early development of emotion classifier, most of the studies proposed to use traditional machine-learning approach. However, the neural-based approach is able to gain better performance, which leads more scholars to exploit it to deal with this task. In chatbot, the system will generate several responses based on several emotion categories. Then the system will respond with the most appropriate emotion based on emotion detected on posted utterance by emotion classifier. Based on Table TABREF10 , studies have different emotion categories based on their focus and objective in building chatbots. Resource for Building EAC In this section, we try to investigate the available resources in building EAC. As other artificial intelligent agents, building chatbot also needs a dataset to be learned, to be able to produce a meaningful conversation as a human-like agent. Therefore, some studies propose dataset which contains textual conversation annotated by different emotion categories. Table TABREF12 summarizes the available dataset that found in recent years. We categorize the dataset based on the language, source of the data, and further description which contains some information such as annotation approach, size of instances, and emotion labels. All datasets were proposed during 2017 and 2018, started by dataset provided by NLPCC 2017 Shared Task on Emotion Generation Challenge organizers. This dataset gathered from Sina Weibo social media , so it consists of social conversation in Chinese. Based on our study, all of the datasets that we discover are only available in two languages, English and Chinese. However, the source of these datasets is very diverse such as social media (Twitter, Sina Weibo, and Facebook Message), online content, and human writing through crowdsourcing scenario. 
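Returning briefly to the modelling side: the emotion-conditioned encoder-decoder setup described above can be sketched roughly as follows. PyTorch is an assumed framework here, and this bare-bones illustration is not any specific published model such as ECM; it only shows one common way of injecting an emotion category into the decoder:

```python
import torch
import torch.nn as nn

class EmotionSeq2Seq(nn.Module):
    """Minimal GRU encoder-decoder where the target emotion category is
    embedded and fed to the decoder at every step (illustration only)."""
    def __init__(self, vocab_size, n_emotions, emb_dim=128, hid_dim=256, emo_dim=32):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.emo_emb = nn.Embedding(n_emotions, emo_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim + emo_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt, emotion):
        # Encode the user post.
        _, h = self.encoder(self.word_emb(src))
        # Repeat the emotion vector across decoding steps and concatenate.
        emo = self.emo_emb(emotion).unsqueeze(1).expand(-1, tgt.size(1), -1)
        dec_in = torch.cat([self.word_emb(tgt), emo], dim=-1)
        dec_out, _ = self.decoder(dec_in, h)
        return self.out(dec_out)          # logits over the response vocabulary

# Toy usage: batch of 2 posts, responses of length 5, emotion ids in [0, 6).
model = EmotionSeq2Seq(vocab_size=1000, n_emotions=6)
src = torch.randint(0, 1000, (2, 7))
tgt = torch.randint(0, 1000, (2, 5))
emotion = torch.tensor([0, 3])
logits = model(src, tgt, emotion)         # shape: (2, 5, 1000)
```

In practice the emotion id fed to the decoder would come from the separate emotion classifier described above, applied to the user's post.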
Our investigation found that every dataset use different set of emotion label depends on its focus and objective in building the chatbot. As we mentioned in the previous section, that emotion classifier is also an integral part of the emotionally-aware chatbot. In building emotion classifier, there are also available several affective resources, which already widely used in the emotion classification task. Table TABREF16 shows the available affective resources we discovered, range from old resources such as LIWC, ANEW, and DAL, until the modern version such as DepecheMood and EmoWordNet. Based on some prior studies, emotion can be classified into two fundamental viewpoints, including discrete categories and dimensional models. Meanwhile, this feature view emotion as a discrete category which differentiates emotion into several primary emotion. The most popular was proposed by BIBREF37 that differentiate emotion into six categories including anger, disgust, fear, happiness, sadness, and surprise. Three different resources will be used to get emotion contained in the tweet, including EmoLex, EmoSenticNet, LIWC, DepecheMood, and EmoWordNet. Dimensional model is another viewpoint of emotion that define emotion according to one or more dimension. Based on the dimensional approach, emotion is a coincidence value on some different strategic dimensions. We will use two various lexical resources to get the information about the emotional dimension of each tweet, including Dictionary of Affect in Language (DAL), ANEW, and NRC VAD. Evaluating EAC We characterize the evaluation of Emotionally-Aware Chatbot into two different parts, qualitative and quantitative assessment. Qualitative assessment will focus on assessing the functionality of the software, while quantitative more focus on measure the chatbots' performance with a number. Qualitative Assessment Based on our investigation of several previous studies, we found that most of the works utilized ISO 9241 to assess chatbots' quality by focusing on the usability aspect. This aspect can be grouped into three focuses, including efficiency, effectiveness, and satisfaction, concerning systems' performance to achieve the specified goals. Here we will explain every focus based on several categories and quality attributes. Efficiency aspect focuses on several categories, including robustness to manipulation and unexpected input BIBREF55 . Another study tries to asses the chatbots' ability to control damage and inappropriate utterance BIBREF56 . Effectiveness aspect covers two different categories, functionality and humanity. In functionality point of view, a study by BIBREF57 propose to asses how a chatbot can interpret command accurately and provide its status report. Other functionalities, such as chatbots' ability to execute the task as requested, the output linguistic accuracy, and ease of use suggested being assessed BIBREF58 . Meanwhile, from the human aspect, most of the studies suggest each conversational machine should pass Turing test BIBREF21 . Other prominent abilities that chatbot needs to be mastered can respond to specific questions and able to maintain themed discussion. Satisfaction aspect has three categories, including affect, ethics and behaviour, and accessibility. Affect is the most suitable assessment categories for EAC. 
This category assesses several quality aspects, such as the chatbot's ability to convey personality, give conversational cues, provide emotional information through tone, inflexion, and expressivity, entertain and/or enable the participant to enjoy the interaction, and read and respond to the moods of the human participant BIBREF59. The ethics and behaviour category focuses on how a chatbot can protect and respect privacy BIBREF57, with other quality aspects including sensitivity to safety and social concerns, and trustworthiness BIBREF60. The last category is accessibility, whose main quality aspects are the chatbot's ability to detect meaning or intent and to respond to social cues.

Quantitative Assessment

In automatic evaluation, some studies evaluate the system at the emotion level BIBREF15, BIBREF28, using common metrics such as precision, recall, and accuracy to measure system performance against gold labels; this evaluation is similar to emotion classification tasks such as SemEval 2018 BIBREF32 and SemEval 2019. Other studies propose perplexity to evaluate the model at the content level, i.e., to determine whether the content is relevant and grammatical BIBREF14, BIBREF39, BIBREF28; this metric is widely used to evaluate dialogue systems that rely on a probabilistic approach BIBREF61. Another work by BIBREF14 uses BLEU to compare the machine response against the gold (actual) response, although using BLEU to evaluate conversation generation is not recommended by BIBREF62 due to its low correlation with human judgment. Manual evaluation, in contrast, relies on human judgement of the chatbot's performance according to several criteria. BIBREF15 used three annotators to rate chatbot responses on two criteria, content (scale 0, 1, 2) and emotion (scale 0, 1). Content measures whether the response is naturally acceptable and could plausibly have been produced by a human; this measurement has been adopted and recommended by researchers and conversation challenge tasks, as proposed in BIBREF38. Emotion measures whether the emotional expression contained in the response agrees with the given gold emotion category. Similarly, BIBREF28 used four annotators to score responses on consistency, logic, and emotion: consistency measures the fluency and grammaticality of the response, logic measures the degree to which the post and response logically match, and emotion measures whether the response contains the appropriate emotion; all of these aspects were measured on a three-point scale (0, 1, 2). Meanwhile, BIBREF39 propose naturalness and emotion impact as criteria for evaluating chatbot responses: naturalness evaluates whether the response is intelligible, logically follows the context of the conversation, and is acceptable as a human response, while emotion impact measures whether the response elicits a positive emotion or triggers an emotionally positive dialogue, since their study focuses only on positive emotion. Another study by BIBREF14 uses crowdsourcing to gather human judgements on three aspects of performance: empathy/sympathy (did the responses show understanding of the feelings of the person talking about their experience?), relevance (did the responses seem appropriate to the conversation? were they on-topic?), and fluency (could you understand the responses? did the language seem accurate?).
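A minimal sketch of the automatic metrics mentioned above might look as follows. The gold and predicted emotion labels and the token log-probabilities are invented for the example, and scikit-learn is assumed to be available.

```python
# Sketch of the automatic metrics above: emotion-level accuracy/precision/recall against
# gold labels, and perplexity computed from per-token log-probabilities of a response.
import math
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

gold = ["joy", "anger", "joy", "sadness"]        # gold emotion labels (illustrative)
pred = ["joy", "joy",   "joy", "sadness"]        # labels predicted by the system
prec, rec, f1, _ = precision_recall_fscore_support(gold, pred, average="macro", zero_division=0)
print(accuracy_score(gold, pred), prec, rec, f1)

def perplexity(token_logprobs):
    """token_logprobs: natural-log probabilities the model assigns to each generated token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

print(perplexity([-1.2, -0.7, -2.3, -0.9]))      # lower perplexity = more confident model
```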
All of these aspects were rated with three possible responses (1: not at all, 3: somewhat, 5: very much) by around 100 different annotators. After collecting the human judgements for the different criteria, some of these studies used a t-test to establish statistical significance BIBREF28, BIBREF39, while others used inter-annotator agreement measures such as Fleiss' kappa BIBREF15, BIBREF14. Based on these evaluations, they can compare their system's performance with baselines or other state-of-the-art systems.

Related Work

Some prior work has tried to provide the full story of chatbot development in both industry and academia. However, to the best of our knowledge, no study has focused on summarizing the development of chatbots that take the emotional aspect into account, even though this aspect has been getting more attention in recent years. BIBREF63 provides a long history of chatbot technology development and describes several practical uses of chatbots, such as entertainment tools, tools to learn and practice language, information retrieval tools, and assistance for e-commerce and other business activities. BIBREF64 reviewed the development of chatbots from rudimentary models to more advanced intelligent systems, summarizing the techniques used from the early days until recent years. More recently, BIBREF65 provide a more systematic review of previous work on chatbot development. They classify chatbots into two main categories based on goals, task-oriented and non-task-oriented, and into three main categories based on development technique: rule-based, retrieval-based, and generative-based approaches. They also summarize the detailed techniques within these three approaches.

Discussion and Conclusion

In this work, a systematic review of emotionally-aware chatbots is presented. We focus on three main issues: how to incorporate affective information into chatbots, what resources are available and can be used to build EAC, and how to evaluate EAC performance. The rise of EAC started with Parry, which used a simple rule-based approach. Now, most EAC are built with a neural-based approach, exploiting an emotion classifier to detect the emotion contained in the text. In the modern era, the development of EAC has gained more attention since the Emotion Generation Challenge shared task at NLPCC 2017. In this era, most EAC are developed by adopting an encoder-decoder architecture with sequence-to-sequence learning, with recurrent neural network variants such as long short-term memory (LSTM) and gated recurrent units (GRU) used in the learning process. Several datasets are now available for developing EAC; however, they exist only in English and Chinese. These datasets are gathered from various sources, including social media, online websites, and manual construction through crowdsourcing. Overall, the difference between these datasets and common chatbot-building datasets is the presence of an emotion label. In addition, we investigate the affective resources commonly used in the emotion classification task. Here we focus only on English resources and found several, from older ones such as LIWC and EmoLex to newer ones such as DepecheMood and EmoWordNet.
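Returning to the statistical tooling mentioned at the start of this passage, the sketch below computes an independent-samples t-test and Fleiss' kappa over human ratings. The ratings are invented, and the library choices (SciPy and statsmodels) are ours for illustration, not taken from any of the cited papers.

```python
# Sketch of the significance test and agreement measure mentioned above.
# Scores below are made-up human ratings on a 0-2 scale.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

system_a = [2, 1, 2, 2, 1, 2, 0, 2]          # mean human ratings for system A
system_b = [1, 1, 0, 2, 1, 1, 0, 1]          # mean human ratings for system B
print(ttest_ind(system_a, system_b))         # t statistic and p-value

# ratings: rows = items, columns = annotators (three annotators, labels 0/1/2)
ratings = np.array([[2, 2, 1], [0, 0, 0], [1, 2, 1], [2, 2, 2]])
table, _ = aggregate_raters(ratings)         # item-by-category count table
print(fleiss_kappa(table))                   # inter-annotator agreement
```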
In the final part, we gather information about how to evaluate the performance of EAC, and we classify the approaches into two techniques: qualitative and quantitative assessment. For qualitative assessment, most studies use ISO 9241, which covers aspects such as efficiency, effectiveness, and satisfaction. In quantitative assessment, two techniques can be used: automatic evaluation (for example, using perplexity) and manual evaluation (involving human judgement). Overall, we can see that the effort to humanize chatbots by incorporating the affective aspect has become a hot topic. We also predict that this development will continue towards a multilingual perspective, since up to now every chatbot has focused on only one language. We also expect that future studies on humanizing chatbots will not only utilize emotion information but will also focus on context-aware chatbots.
Qualitatively through efficiency, effectiveness and satisfaction aspects and quantitatively through metrics such as precision, recall, accuracy, BLEU score and even human judgement.
b093b440ae3cd03555237791550f3224d159d85b
b093b440ae3cd03555237791550f3224d159d85b_0
Q: What are the currently available datasets for EAC? Text: Introduction

The development of conversational agents, or dialogue systems, has been gaining more attention from both industry and academia BIBREF0, BIBREF1 in recent years. Some works model them for domain-specific tasks such as customer service BIBREF2, BIBREF3 and shopping assistance BIBREF4, while others design multi-purpose agents such as Siri, Amazon Alexa, and Google Assistant. This domain is a well-researched area in the Human-Computer Interaction research community, but it remains a hot topic today. The main development focus right now is to build an intelligent, humanized machine that achieves better engagement when communicating with humans BIBREF5. Better engagement leads to higher user satisfaction, which is the main objective from the industry perspective. In this study, we focus only on the textual conversational agent, or chatbot: a conversational artificial intelligence that can conduct textual communication with a human by exploiting several natural language processing techniques. Several approaches are used to build a chatbot, from simple rule-based approaches BIBREF6, BIBREF7 to more sophisticated neural-based techniques BIBREF8, BIBREF9. Nowadays, chatbots are mostly used for customer service such as booking systems BIBREF10, BIBREF11 and shopping assistance BIBREF3, or simply as conversational partners such as Endurance and Insomnobot. There is therefore a significant urgency to humanize chatbots in order to achieve better user engagement. Several approaches have already been proposed to improve a chatbot's user engagement, such as building a context-aware chatbot BIBREF12 and injecting personality into the machine BIBREF13. Other works try to incorporate affective computing to build emotionally-aware chatbots BIBREF2, BIBREF14, BIBREF15. Existing studies show that adding emotion information to dialogue systems improves user satisfaction BIBREF16, BIBREF17. Emotion information contributes to a more positive interaction between machine and human, which in turn reduces miscommunication BIBREF18. Previous studies have also found that using affect information can help a chatbot understand the user's emotional state in order to generate better responses BIBREF19. Beyond emotion, another study introduces the use of tones to improve service satisfaction; for instance, an empathetic tone reduces user stress and results in more engagement. BIBREF2 found that tone is an important aspect of building a customer care chatbot, identifying eight different tones: anxious, frustrated, impolite, passionate, polite, sad, satisfied, and empathetic. In this paper, we summarize previous studies that focus on injecting emotion information into chatbots, in order to uncover recent issues and barriers in building engaging emotionally-aware chatbots. To that end, we propose several research questions to better define the problem. This paper is organized as follows: Section 2 introduces the history of the relation between affective information and chatbots. Section 3 outlines works that try to inject affective information into chatbots. Section 4 summarizes affective resources that can be utilized to provide affective information. Section 5 then describes evaluation metrics that have been applied in previous works related to emotionally-aware chatbots.
Finally, Section 6 concludes the paper and provides a prediction of future developments in this research direction based on our analysis.

History of Emotionally-Aware Chatbot

The early development of chatbots was inspired by the Turing test in 1950 BIBREF20. Eliza was the first publicly known chatbot, built using simple hand-crafted scripts BIBREF21. Parry BIBREF22 was another chatbot that successfully passed the Turing test. Like Eliza, Parry still used a rule-based approach, but with better understanding, including a mental model that can simulate emotion; Parry is therefore the first chatbot to involve emotion in its development. Also worth mentioning is ALICE (Artificial Linguistic Internet Computer Entity), a customizable chatbot based on the Artificial Intelligence Markup Language (AIML); ALICE likewise uses a rule-based approach, executing a pattern matcher recursively to obtain the response. Then, in May 2014, Microsoft introduced XiaoIce BIBREF23, an empathetic social chatbot able to recognize users' emotional needs. XiaoIce can provide engaging interpersonal communication by giving encouragement or other affective messages, and so can hold human attention during communication. Nowadays, most chatbot technologies are built using neural-based approaches. The Emotional Chatting Machine (ECM) BIBREF15 was the first work to exploit a deep learning approach in building a large-scale emotionally-aware conversational bot. Several studies have since addressed this research area by introducing emotion embedding representations BIBREF24, BIBREF25, BIBREF26 or by modelling it as a reinforcement learning problem BIBREF27, BIBREF28. Most of these studies use an encoder-decoder architecture, specifically sequence-to-sequence (seq2seq) learning. Some works also introduce new datasets in order to obtain a better gold standard and improve system performance. BIBREF14 introduce the EMPATHETICDIALOGUES dataset, a novel dataset containing 25k conversations with emotional context information, to facilitate training and evaluating textual conversational systems. Work by BIBREF2 produces a dataset containing 1.5 million Twitter conversations, gathered with the Twitter API from the customer care accounts of 62 brands across several industries; this dataset was used to build a tone-aware customer care chatbot. Finally, BIBREF29 enhanced the SEMAINE corpus BIBREF30 through a crowdsourcing scenario to obtain human judgements deciding which responses elicit positive emotion; their dataset was used to develop a chatbot that captures human emotional states and elicits positive emotion during the conversation.

Building Emotionally-Aware Chatbot (EAC)

As mentioned before, emotion is an essential aspect of building a humanized chatbot. The rise of the emotionally-aware chatbot started with Parry BIBREF22 in early 1975. Now, most EAC development exploits neural-based models. In this section, we review previous works that focus on EAC development; Table TABREF10 summarizes this information, including the objective and the approach exploited by each work. In early development, EAC were designed using a rule-based approach, whereas in recent years most EAC exploit neural-based approaches. EAC development became a hot topic starting in 2017, marked by the first shared task, the Emotion Generation Challenge at NLPCC 2017 BIBREF31.
EMPATHETICDIALOGUES dataset; a dataset containing 1.5 million Twitter conversations, gathered using the Twitter API from the customer care accounts of 62 brands across several industries; and the SEMAINE corpus BIBREF30
ad16c8261c3a0b88c685907387e1a6904eb15066
ad16c8261c3a0b88c685907387e1a6904eb15066_0
Q: What are the research questions posed in the paper regarding EAC studies?
how to incorporate affective information into chatbots, what resources are available and can be used to build EAC, and how to evaluate EAC performance
d3014683dff9976b7c56b72203df99f0e27e9989
d3014683dff9976b7c56b72203df99f0e27e9989_0
Q: What evaluation metrics did they use? Text: Introduction Continuous distributional vectors for representing words (embeddings) BIBREF0 have become ubiquitous in modern, neural NLP. Cross-lingual representations BIBREF1 additionally represent words from various languages in a shared continuous space, which in turn can be used for Bilingual Lexicon Induction (BLI). BLI is often the first step towards several downstream tasks such as Part-Of-Speech (POS) tagging BIBREF2, parsing BIBREF3, document classification BIBREF4, and machine translation BIBREF5, BIBREF6, BIBREF7. Often, such shared representations are learned with a two-step process, whether under bilingual or multilingual settings (hereinafter BWE and MWE, respectively). First, monolingual word embeddings are learned over large swaths of text; such pre-trained word embeddings, in fact, are available for several languages and are widely used, like the fastText Wikipedia vectors BIBREF8. Second, a mapping between the languages is learned, in one of three ways: in a supervised manner if dictionaries or parallel data are available to be used for supervision BIBREF9, under minimal supervision e.g. using only identical strings BIBREF10, or even in a completely unsupervised fashion BIBREF11, BIBREF12. Both in bilingual and multilingual settings, it is common that one of the language embedding spaces is the target to which all other languages get aligned to (hereinafter “the hub"). We outline the details in Section SECREF2. Despite all the recent progress in learning cross-lingual embeddings, we identify a major shortcoming to previous work: it is by and large English-centric. Notably, most MWE approaches essentially select English as the hub during training by default, aligning all other language spaces to the English one. We argue and empirically show, however, that English is a poor hub language choice. In BWE settings, on the other hand, it is fairly uncommon to denote which of the two languages is the hub (often this is implied to be the target language). However, we experimentally find that this choice can greatly impact downstream performance, especially when aligning distant languages. This Anglocentricity is even more evident at the evaluation stage. The lexica most commonly used for evaluation are the MUSE lexica BIBREF12 which cover 45 languages, but with translations only from and into English. Even still, alternative evaluation dictionaries are also very English- and European-centric: BIBREF13 report results on English–Italian, BIBREF14 on English–German and English–Finnish, BIBREF11 on Spanish–English and Italian–English, and BIBREF15 between English and Italian, German, Finnish, Spanish, and Turkish. We argue that cross-lingual word embedding mapping methods should look beyond English for their evaluation benchmarks because, compared to all others, English is a language with disproportionately large available data and relatively poor inflectional morphology e.g., it lacks case, gender, and complex verbal inflection systems BIBREF16. These two factors allow for an overly easy evaluation setting which does not necessarily generalize to other language pairs. In light of this, equal focus should instead be devoted to evaluation over more diverse language pairs that also include morphologically rich and low-resource languages. 
With this work, we attempt to address these shortcomings, providing the following contributions: We show that the choice of the hub when evaluating on diverse language pairs can lead to significantly different performance (e.g., by more than 10 percentage points for BWE over distant languages). We also show that often English is a suboptimal hub for MWE. We identify some general guidelines for choosing a hub language which could lead to stronger baselines; less isometry between the hub and source and target embedding spaces mildly correlates with performance, as does typological distance (a measure of language similarity based on language family membership trees). For distant languages, multilingual systems should in most cases be preferred over bilingual ones. We provide resources for training and evaluation on non-Anglocentric language pairs. We outline a simple triangulation method with which we extend the MUSE dictionaries to an additional 2352 lexicons covering 49 languages, and we present results on a subset of them. We also create new evaluation lexica for under-resourced languages using Azerbaijani, Belarusian, and Galician as our test cases. We additionally provide recipes for creating such dictionaries for any language pair with available parallel data.

Cross-Lingual Word Embeddings and Lexicon Induction

In the supervised bilingual setting, as formulated by BIBREF1, given two languages $\mathcal{L} = \lbrace l_1, l_2\rbrace$ and their pre-trained row-aligned embeddings $\mathcal{X}_1, \mathcal{X}_2$, respectively, a transformation matrix $\mathbf{M}$ is learned such that:

$$\mathbf{M}^{*} = \arg\min_{\mathbf{M} \in \Omega} \left\Vert \mathcal{X}_1 \mathbf{M} - \mathcal{X}_2 \right\Vert$$

The set $\Omega$ can potentially impose a constraint over $\mathbf{M}$, such as the very popular constraint of restricting it to be orthogonal BIBREF17. Previous work has empirically found that this simple formulation is competitive with other more complicated alternatives BIBREF17, BIBREF12. The orthogonality assumption ensures that there exists a closed-form solution in the form of the Singular Value Decomposition (SVD) of $\mathcal{X}_1^{\top}\mathcal{X}_2$. Note that in this case only a single matrix $\mathbf{M}$ needs to be learned, because $\left\Vert \mathcal{X}_1\mathbf{M} - \mathcal{X}_2 \right\Vert = \left\Vert \mathcal{X}_1 - \mathcal{X}_2\mathbf{M}^{-1} \right\Vert$, while at the same time a model that minimizes $\left\Vert \mathcal{X}_1\mathbf{M} - \mathcal{X}_2 \right\Vert$ is as expressive as one minimizing $\left\Vert \mathcal{X}_1\mathbf{M}_1 - \mathcal{X}_2\mathbf{M}_2 \right\Vert$, and easier to learn. In the minimally supervised or even the unsupervised setting BIBREF11, the popular methods follow an iterative refinement approach BIBREF14. Starting with a seed dictionary (e.g. from identical strings BIBREF18 or numerals), an initial mapping is learned in the same manner as in the supervised setting. The initial mapping, in turn, is used to expand the seed dictionary with high-confidence word translation pairs. The new dictionary is then used to learn a better mapping, and the iterations continue until convergence. Similarly, in a multilingual setting, one could start with $N$ languages $\mathcal{L} = \lbrace l_1, l_2, \ldots, l_N\rbrace$ and their respective pre-trained embeddings $\mathcal{X}_1, \mathcal{X}_2, \ldots, \mathcal{X}_N$, and then learn $N-1$ bilingual mappings between a pre-selected target language and all others. Hence, one of the language spaces is treated as a target (the hub) and remains invariant, while all others are mapped into the (now shared) hub language space. Alternatively, those mappings could be jointly learned using the MAT+MPSR methods of BIBREF19 – also taking advantage of the inter-dependencies between any two language pairs.
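As a small illustration of the closed-form orthogonal solution described above, the following NumPy sketch recovers a simulated mapping under the same row-vector convention (rows of X_src and X_tgt are embeddings of translation pairs). The function and variable names and the simulated data are ours.

```python
# Closed-form orthogonal mapping (Procrustes) from a supervised seed dictionary:
# solve min_W ||X_src W - X_tgt|| with W constrained to be orthogonal.
import numpy as np

def procrustes(x_src, x_tgt):
    u, _, vt = np.linalg.svd(x_src.T @ x_tgt)
    return u @ vt                                            # orthogonal W

rng = np.random.default_rng(0)
x_tgt = rng.normal(size=(1000, 300))                         # 1000 "words", 300-dim vectors
true_w = np.linalg.qr(rng.normal(size=(300, 300)))[0]        # a random orthogonal rotation
x_src = x_tgt @ true_w.T                                     # simulated "source" space
w = procrustes(x_src, x_tgt)
print(np.allclose(x_src @ w, x_tgt, atol=1e-6))              # True: the mapping is recovered
```

In the weakly supervised or unsupervised setting described above, this same solver would simply be called repeatedly inside the refinement loop, with the dictionary re-induced from the current mapping at each iteration.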
Importantly, though, there is no closed form solution for learning the joint mapping, hence a solution needs to be approximated with gradient-based methods. MAT+MPSR generalizes the adversarial approach of BIBREF11 to multiple languages, and also follows an iterative refinement approach. Similar to BIBREF19, BIBREF20 note that it is important to use all $N^2$ language pairs when optimizing multilingual alignments, rather than just the $N-1$ pairs between the hub and all other languages, and propose a model (UMH) that implements this intuition within a Wasserstein-Procrustes approach. Even though the architecture and modeling approach of MAT+MPSR and of UMH are not the same, the two methods are conceptually similar, as in both cases a language is chosen as the hub, and $N-1$ mappings for the other languages are learned. In either case, English is by default selected to be the hub. The only exception is the study of triplets alignments in BIBREF20, where Spanish is used as the hub for the Spanish–French–Portuguese triplet, although the authors mention that they found little differences due the choice of the hub, in contrast to our general findings. Other than MAT+MPSR and UMH, another unsupervised multilingual approach is that of BIBREF21, who propose to incrementally align multiple languages by adding each new language as a hub. We decided, though, against comparing to this method, because (a) their method requires learning $\mathcal {O}(N^2)$ mappings for relatively small improvements and (b) the order in which the languages are added is an additional hyperparameter that would explode the experimental space. Cross-Lingual Word Embeddings and Lexicon Induction ::: Lexicon Induction One of the most common downstream evaluation tasks for the learned cross-lingual word mappings is Lexicon Induction (LI), the task of retrieving the most appropriate word-level translation for a query word from the mapped embedding spaces. Specialized evaluation (and training) dictionaries have been created for multiple language pairs, with the MUSE dictionaries BIBREF12 most often used, providing word translations between English (En) and 48 other high- to mid-resource languages, as well as on all 30 pairs among 6 very similar Romance and Germanic languages (English, French, German, Spanish, Italian, Portuguese). Given the mapped embedding spaces, the translations are retrieved using a distance metric, with Cross-Lingual Similarity Scaling BIBREF12 as the most common and best performing in the literature. Intuitively, CSLS decreases the scores of pairs that lie in dense areas, increasing the scores of rarer words (which are harder to align). The retrieved pairs are compared to the gold standard and evaluated using precision at $k$ (P@$k$, evaluating how often the correct translation is within the $k$ retrieved nearest neighbours of the query). Throughout this work we report P@1, which is equivalent to accuracy, but we also provide results with P@5 and P@10 in the Appendix. Conclusion With this work we challenge the standard practices in learning cross-lingual word embeddings. We empirically showed that the choice of the hub language is an important parameter that affects lexicon induction performance in both bilingual (between distant languages) and multilingual settings. More importantly, we hope that by providing new dictionaries and baseline results on several language pairs, we will stir the community towards evaluating all methods in challenging scenarios that include under-represented language pairs. 
Towards this end, our analysis provides insights and general directions for stronger baselines for non-Anglocentric cross-lingual word embeddings.
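A possible numpy sketch of the CSLS retrieval criterion described in the Lexicon Induction subsection above; the neighbourhood size k=10 and the function name are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def csls_retrieve(src, tgt, k=10):
    """Rank target words for each (mapped) source vector with CSLS.
    src: (n_src, d) mapped source embeddings; tgt: (n_tgt, d) target
    embeddings. Returns the index of the top-1 target word per query."""
    # Cosine similarities between all source and target vectors.
    src_n = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt_n = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sims = src_n @ tgt_n.T                                # (n_src, n_tgt)
    # Mean similarity of each point to its k nearest cross-lingual
    # neighbours: the hubness penalties of CSLS.
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)    # (n_src,)
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)    # (n_tgt,)
    csls = 2 * sims - r_src[:, None] - r_tgt[None, :]
    return csls.argmax(axis=1)
```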
we report P@1, which is equivalent to accuracy, but we also provide results with P@5 and P@10 in the Appendix
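Since the answer refers to P@k, a small self-contained sketch of how such a score can be computed follows; the toy dictionary entries are invented for illustration and do not come from the MUSE lexica.

```python
def precision_at_k(ranked_candidates, gold, k=1):
    """ranked_candidates: dict query -> list of predicted translations,
    best first; gold: dict query -> set of acceptable translations.
    Returns the fraction of queries whose top-k list hits the gold set."""
    hits = sum(
        1 for q, cands in ranked_candidates.items()
        if gold.get(q, set()) & set(cands[:k])
    )
    return hits / len(ranked_candidates)

# Tiny illustrative example (made-up entries).
preds = {"gato": ["cat", "dog"], "perro": ["wolf", "dog"]}
gold = {"gato": {"cat"}, "perro": {"dog"}}
print(precision_at_k(preds, gold, k=1))  # 0.5
print(precision_at_k(preds, gold, k=5))  # 1.0
```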
ed522090941f61e97ec3a39f52d7599b573492dd
ed522090941f61e97ec3a39f52d7599b573492dd_0
Q: What is triangulation? Text: Introduction Continuous distributional vectors for representing words (embeddings) BIBREF0 have become ubiquitous in modern, neural NLP. Cross-lingual representations BIBREF1 additionally represent words from various languages in a shared continuous space, which in turn can be used for Bilingual Lexicon Induction (BLI). BLI is often the first step towards several downstream tasks such as Part-Of-Speech (POS) tagging BIBREF2, parsing BIBREF3, document classification BIBREF4, and machine translation BIBREF5, BIBREF6, BIBREF7. Often, such shared representations are learned with a two-step process, whether under bilingual or multilingual settings (hereinafter BWE and MWE, respectively). First, monolingual word embeddings are learned over large swaths of text; such pre-trained word embeddings, in fact, are available for several languages and are widely used, like the fastText Wikipedia vectors BIBREF8. Second, a mapping between the languages is learned, in one of three ways: in a supervised manner if dictionaries or parallel data are available to be used for supervision BIBREF9, under minimal supervision e.g. using only identical strings BIBREF10, or even in a completely unsupervised fashion BIBREF11, BIBREF12. Both in bilingual and multilingual settings, it is common that one of the language embedding spaces is the target to which all other languages get aligned to (hereinafter “the hub"). We outline the details in Section SECREF2. Despite all the recent progress in learning cross-lingual embeddings, we identify a major shortcoming to previous work: it is by and large English-centric. Notably, most MWE approaches essentially select English as the hub during training by default, aligning all other language spaces to the English one. We argue and empirically show, however, that English is a poor hub language choice. In BWE settings, on the other hand, it is fairly uncommon to denote which of the two languages is the hub (often this is implied to be the target language). However, we experimentally find that this choice can greatly impact downstream performance, especially when aligning distant languages. This Anglocentricity is even more evident at the evaluation stage. The lexica most commonly used for evaluation are the MUSE lexica BIBREF12 which cover 45 languages, but with translations only from and into English. Even still, alternative evaluation dictionaries are also very English- and European-centric: BIBREF13 report results on English–Italian, BIBREF14 on English–German and English–Finnish, BIBREF11 on Spanish–English and Italian–English, and BIBREF15 between English and Italian, German, Finnish, Spanish, and Turkish. We argue that cross-lingual word embedding mapping methods should look beyond English for their evaluation benchmarks because, compared to all others, English is a language with disproportionately large available data and relatively poor inflectional morphology e.g., it lacks case, gender, and complex verbal inflection systems BIBREF16. These two factors allow for an overly easy evaluation setting which does not necessarily generalize to other language pairs. In light of this, equal focus should instead be devoted to evaluation over more diverse language pairs that also include morphologically rich and low-resource languages. 
With this work, we attempt to address these shortcomings, providing the following contributions: We show that the choice of the hub when evaluating on diverse language pairs can lead to significantly different performance (e.g., by more than 10 percentage points for BWE over distant languages). We also show that often English is a suboptimal hub for MWE. We identify some general guidelines for choosing a hub language which could lead to stronger baselines; less isometry between the hub and source and target embedding spaces mildly correlates with performance, as does typological distance (a measure of language similarity based on language family membership trees). For distant languages, multilingual systems should in most cases be preferred over bilingual ones. We provide resources for training and evaluation on non-Anglocentric language pairs. We outline a simple triangulation method with which we extend the MUSE dictionaries to an additional 2352 lexicons covering 49 languages, and we present results on a subset of them. We also create new evaluation lexica for under-resourced languages using Azerbaijani, Belarusian, and Galician as our test cases. We additionally provide recipes for creating such dictionaries for any language pair with available parallel data. Cross-Lingual Word Embeddings and Lexicon Induction In the supervised bilingual setting, as formulated by BIBREF1, given two languages $\mathcal {L} = \lbrace l_1,l_2\rbrace $ and their pre-trained row-aligned embeddings $\mathcal {X}_1, \mathcal {X}_2,$ respectively, a transformation matrix $$ is learned such that: The set $\Omega $ can potentially impose a constraint over $$, such as the very popular constraint of restricting it to be orthogonal BIBREF17. Previous work has empirically found that this simple formulation is competitive with other more complicated alternatives BIBREF17, BIBREF12. The orthogonality assumption ensures that there exists a closed-form solution in the form of the Singular Value Decomposition (SVD) of $_1_2^T$. Note that in this case only a single matrix $$ needs to be learned, because $\left\Vert _1 - ^{}_2 \right\Vert =\left\Vert ^{-1}_1 - _2 \right\Vert $, while at the same time a model that minimizes $\left\Vert _1 - _2 \right\Vert $ is as expressive as one minimizing $\left\Vert _1_1 - _2_2 \right\Vert $, and easier to learn. In the minimally supervised or even the unsupervised setting BIBREF11 the popular methods follow an iterative refinement approach BIBREF14. Starting with a seed dictionary (e.g. from identical strings BIBREF18 or numerals) an initial mapping is learned in the same manner as in the supervised setting. The initial mapping, in turn, is used to expand the seed dictionary with high confidence word translation pairs. The new dictionary is then used to learn a better mapping, and so forth the iterations continue until convergence. Similarly, in a multilingual setting, one could start with $N$ languages $\mathcal {L} = \lbrace l_1,l_2,\ldots ,l_N\rbrace $ and their respective pre-trained embeddings $\mathcal {X}_1, \mathcal {X}_2,\ldots ,_N$, and then learn $N-1$ bilingual mappings between a pre-selected target language and all others. Hence, one of the language spaces is treated as a target (the hub) and remains invariant, while all others are mapped into the (now shared) hub language space. Alternatively, those mappings could be jointly learned using the MAT+MPSR methods of BIBREF19 – also taking advantage of the inter-dependencies between any two language pairs. 
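The iterative refinement (self-learning) loop summarised above can be sketched as follows, assuming length-normalised embeddings and a seed dictionary given as index pairs; mutual nearest neighbours stand in here for the "high confidence" pairs, whereas real systems typically rely on CSLS scores and frequency thresholds.

```python
import numpy as np

def procrustes(X1, X2):
    # Orthogonal map fitted on the current dictionary, as in the
    # supervised setting.
    U, _, Vt = np.linalg.svd(X1.T @ X2)
    return U @ Vt

def self_learning(X_src, X_tgt, seed_pairs, n_iter=5):
    """Iterative refinement: fit a mapping on the current dictionary,
    then rebuild the dictionary from mutual nearest neighbours.
    seed_pairs: list of (src_index, tgt_index) tuples."""
    pairs = list(seed_pairs)
    W = None
    for _ in range(n_iter):
        src_idx, tgt_idx = zip(*pairs)
        W = procrustes(X_src[list(src_idx)], X_tgt[list(tgt_idx)])
        # Expand the dictionary with high-confidence (mutual NN) pairs.
        sims = (X_src @ W) @ X_tgt.T
        fwd = sims.argmax(axis=1)          # best target per source word
        bwd = sims.argmax(axis=0)          # best source per target word
        pairs = [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]
    return W, pairs
```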
Importantly, though, there is no closed form solution for learning the joint mapping, hence a solution needs to be approximated with gradient-based methods. MAT+MPSR generalizes the adversarial approach of BIBREF11 to multiple languages, and also follows an iterative refinement approach. Similar to BIBREF19, BIBREF20 note that it is important to use all $N^2$ language pairs when optimizing multilingual alignments, rather than just the $N-1$ pairs between the hub and all other languages, and propose a model (UMH) that implements this intuition within a Wasserstein-Procrustes approach. Even though the architecture and modeling approach of MAT+MPSR and of UMH are not the same, the two methods are conceptually similar, as in both cases a language is chosen as the hub, and $N-1$ mappings for the other languages are learned. In either case, English is by default selected to be the hub. The only exception is the study of triplets alignments in BIBREF20, where Spanish is used as the hub for the Spanish–French–Portuguese triplet, although the authors mention that they found little differences due the choice of the hub, in contrast to our general findings. Other than MAT+MPSR and UMH, another unsupervised multilingual approach is that of BIBREF21, who propose to incrementally align multiple languages by adding each new language as a hub. We decided, though, against comparing to this method, because (a) their method requires learning $\mathcal {O}(N^2)$ mappings for relatively small improvements and (b) the order in which the languages are added is an additional hyperparameter that would explode the experimental space. Cross-Lingual Word Embeddings and Lexicon Induction ::: Lexicon Induction One of the most common downstream evaluation tasks for the learned cross-lingual word mappings is Lexicon Induction (LI), the task of retrieving the most appropriate word-level translation for a query word from the mapped embedding spaces. Specialized evaluation (and training) dictionaries have been created for multiple language pairs, with the MUSE dictionaries BIBREF12 most often used, providing word translations between English (En) and 48 other high- to mid-resource languages, as well as on all 30 pairs among 6 very similar Romance and Germanic languages (English, French, German, Spanish, Italian, Portuguese). Given the mapped embedding spaces, the translations are retrieved using a distance metric, with Cross-Lingual Similarity Scaling BIBREF12 as the most common and best performing in the literature. Intuitively, CSLS decreases the scores of pairs that lie in dense areas, increasing the scores of rarer words (which are harder to align). The retrieved pairs are compared to the gold standard and evaluated using precision at $k$ (P@$k$, evaluating how often the correct translation is within the $k$ retrieved nearest neighbours of the query). Throughout this work we report P@1, which is equivalent to accuracy, but we also provide results with P@5 and P@10 in the Appendix. Conclusion With this work we challenge the standard practices in learning cross-lingual word embeddings. We empirically showed that the choice of the hub language is an important parameter that affects lexicon induction performance in both bilingual (between distant languages) and multilingual settings. More importantly, we hope that by providing new dictionaries and baseline results on several language pairs, we will stir the community towards evaluating all methods in challenging scenarios that include under-represented language pairs. 
Towards this end, our analysis provides insights and general directions for stronger baselines for non-Anglocentric cross-lingual word embeddings.
Answer with content missing: (Chapter 3) The concept can be easily explained with an example, visualized in Figure 1. Consider the Portuguese (Pt) word trabalho which, according to the MUSE Pt–En dictionary, has the words job and work as possible En translations. In turn, these two En words can be translated to 4 and 5 Czech (Cs) words respectively. By utilizing the transitive property (which translation should exhibit) we can identify the set of 7 possible Cs translations for the Pt word trabalho.
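A minimal sketch of this triangulation step: compose a source–pivot and a pivot–target dictionary through the pivot (English) entries. The Czech lists below are shortened, illustrative stand-ins rather than the actual MUSE entries.

```python
from collections import defaultdict

def triangulate(src_to_pivot, pivot_to_tgt):
    """Compose two word dictionaries through a pivot language.
    Both arguments map a word to a set of translations."""
    src_to_tgt = defaultdict(set)
    for src_word, pivot_words in src_to_pivot.items():
        for pivot_word in pivot_words:
            src_to_tgt[src_word] |= pivot_to_tgt.get(pivot_word, set())
    return dict(src_to_tgt)

# The Pt-En-Cs example from the text, with shortened Czech lists.
pt_en = {"trabalho": {"job", "work"}}
en_cs = {"job": {"práce", "zaměstnání"},
         "work": {"práce", "dílo", "pracovat"}}
print(triangulate(pt_en, en_cs))
# {'trabalho': {'práce', 'zaměstnání', 'dílo', 'pracovat'}}  (set order may vary)
```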
5d164651a4aed7cf24d53ba9685b4bee8c965933
5d164651a4aed7cf24d53ba9685b4bee8c965933_0
Q: What languages are explored in this paper? Text: Introduction Continuous distributional vectors for representing words (embeddings) BIBREF0 have become ubiquitous in modern, neural NLP. Cross-lingual representations BIBREF1 additionally represent words from various languages in a shared continuous space, which in turn can be used for Bilingual Lexicon Induction (BLI). BLI is often the first step towards several downstream tasks such as Part-Of-Speech (POS) tagging BIBREF2, parsing BIBREF3, document classification BIBREF4, and machine translation BIBREF5, BIBREF6, BIBREF7. Often, such shared representations are learned with a two-step process, whether under bilingual or multilingual settings (hereinafter BWE and MWE, respectively). First, monolingual word embeddings are learned over large swaths of text; such pre-trained word embeddings, in fact, are available for several languages and are widely used, like the fastText Wikipedia vectors BIBREF8. Second, a mapping between the languages is learned, in one of three ways: in a supervised manner if dictionaries or parallel data are available to be used for supervision BIBREF9, under minimal supervision e.g. using only identical strings BIBREF10, or even in a completely unsupervised fashion BIBREF11, BIBREF12. Both in bilingual and multilingual settings, it is common that one of the language embedding spaces is the target to which all other languages get aligned to (hereinafter “the hub"). We outline the details in Section SECREF2. Despite all the recent progress in learning cross-lingual embeddings, we identify a major shortcoming to previous work: it is by and large English-centric. Notably, most MWE approaches essentially select English as the hub during training by default, aligning all other language spaces to the English one. We argue and empirically show, however, that English is a poor hub language choice. In BWE settings, on the other hand, it is fairly uncommon to denote which of the two languages is the hub (often this is implied to be the target language). However, we experimentally find that this choice can greatly impact downstream performance, especially when aligning distant languages. This Anglocentricity is even more evident at the evaluation stage. The lexica most commonly used for evaluation are the MUSE lexica BIBREF12 which cover 45 languages, but with translations only from and into English. Even still, alternative evaluation dictionaries are also very English- and European-centric: BIBREF13 report results on English–Italian, BIBREF14 on English–German and English–Finnish, BIBREF11 on Spanish–English and Italian–English, and BIBREF15 between English and Italian, German, Finnish, Spanish, and Turkish. We argue that cross-lingual word embedding mapping methods should look beyond English for their evaluation benchmarks because, compared to all others, English is a language with disproportionately large available data and relatively poor inflectional morphology e.g., it lacks case, gender, and complex verbal inflection systems BIBREF16. These two factors allow for an overly easy evaluation setting which does not necessarily generalize to other language pairs. In light of this, equal focus should instead be devoted to evaluation over more diverse language pairs that also include morphologically rich and low-resource languages. 
With this work, we attempt to address these shortcomings, providing the following contributions: We show that the choice of the hub when evaluating on diverse language pairs can lead to significantly different performance (e.g., by more than 10 percentage points for BWE over distant languages). We also show that often English is a suboptimal hub for MWE. We identify some general guidelines for choosing a hub language which could lead to stronger baselines; less isometry between the hub and source and target embedding spaces mildly correlates with performance, as does typological distance (a measure of language similarity based on language family membership trees). For distant languages, multilingual systems should in most cases be preferred over bilingual ones. We provide resources for training and evaluation on non-Anglocentric language pairs. We outline a simple triangulation method with which we extend the MUSE dictionaries to an additional 2352 lexicons covering 49 languages, and we present results on a subset of them. We also create new evaluation lexica for under-resourced languages using Azerbaijani, Belarusian, and Galician as our test cases. We additionally provide recipes for creating such dictionaries for any language pair with available parallel data. Cross-Lingual Word Embeddings and Lexicon Induction In the supervised bilingual setting, as formulated by BIBREF1, given two languages $\mathcal {L} = \lbrace l_1,l_2\rbrace $ and their pre-trained row-aligned embeddings $\mathcal {X}_1, \mathcal {X}_2,$ respectively, a transformation matrix $$ is learned such that: The set $\Omega $ can potentially impose a constraint over $$, such as the very popular constraint of restricting it to be orthogonal BIBREF17. Previous work has empirically found that this simple formulation is competitive with other more complicated alternatives BIBREF17, BIBREF12. The orthogonality assumption ensures that there exists a closed-form solution in the form of the Singular Value Decomposition (SVD) of $_1_2^T$. Note that in this case only a single matrix $$ needs to be learned, because $\left\Vert _1 - ^{}_2 \right\Vert =\left\Vert ^{-1}_1 - _2 \right\Vert $, while at the same time a model that minimizes $\left\Vert _1 - _2 \right\Vert $ is as expressive as one minimizing $\left\Vert _1_1 - _2_2 \right\Vert $, and easier to learn. In the minimally supervised or even the unsupervised setting BIBREF11 the popular methods follow an iterative refinement approach BIBREF14. Starting with a seed dictionary (e.g. from identical strings BIBREF18 or numerals) an initial mapping is learned in the same manner as in the supervised setting. The initial mapping, in turn, is used to expand the seed dictionary with high confidence word translation pairs. The new dictionary is then used to learn a better mapping, and so forth the iterations continue until convergence. Similarly, in a multilingual setting, one could start with $N$ languages $\mathcal {L} = \lbrace l_1,l_2,\ldots ,l_N\rbrace $ and their respective pre-trained embeddings $\mathcal {X}_1, \mathcal {X}_2,\ldots ,_N$, and then learn $N-1$ bilingual mappings between a pre-selected target language and all others. Hence, one of the language spaces is treated as a target (the hub) and remains invariant, while all others are mapped into the (now shared) hub language space. Alternatively, those mappings could be jointly learned using the MAT+MPSR methods of BIBREF19 – also taking advantage of the inter-dependencies between any two language pairs. 
Importantly, though, there is no closed form solution for learning the joint mapping, hence a solution needs to be approximated with gradient-based methods. MAT+MPSR generalizes the adversarial approach of BIBREF11 to multiple languages, and also follows an iterative refinement approach. Similar to BIBREF19, BIBREF20 note that it is important to use all $N^2$ language pairs when optimizing multilingual alignments, rather than just the $N-1$ pairs between the hub and all other languages, and propose a model (UMH) that implements this intuition within a Wasserstein-Procrustes approach. Even though the architecture and modeling approach of MAT+MPSR and of UMH are not the same, the two methods are conceptually similar, as in both cases a language is chosen as the hub, and $N-1$ mappings for the other languages are learned. In either case, English is by default selected to be the hub. The only exception is the study of triplets alignments in BIBREF20, where Spanish is used as the hub for the Spanish–French–Portuguese triplet, although the authors mention that they found little differences due the choice of the hub, in contrast to our general findings. Other than MAT+MPSR and UMH, another unsupervised multilingual approach is that of BIBREF21, who propose to incrementally align multiple languages by adding each new language as a hub. We decided, though, against comparing to this method, because (a) their method requires learning $\mathcal {O}(N^2)$ mappings for relatively small improvements and (b) the order in which the languages are added is an additional hyperparameter that would explode the experimental space. Cross-Lingual Word Embeddings and Lexicon Induction ::: Lexicon Induction One of the most common downstream evaluation tasks for the learned cross-lingual word mappings is Lexicon Induction (LI), the task of retrieving the most appropriate word-level translation for a query word from the mapped embedding spaces. Specialized evaluation (and training) dictionaries have been created for multiple language pairs, with the MUSE dictionaries BIBREF12 most often used, providing word translations between English (En) and 48 other high- to mid-resource languages, as well as on all 30 pairs among 6 very similar Romance and Germanic languages (English, French, German, Spanish, Italian, Portuguese). Given the mapped embedding spaces, the translations are retrieved using a distance metric, with Cross-Lingual Similarity Scaling BIBREF12 as the most common and best performing in the literature. Intuitively, CSLS decreases the scores of pairs that lie in dense areas, increasing the scores of rarer words (which are harder to align). The retrieved pairs are compared to the gold standard and evaluated using precision at $k$ (P@$k$, evaluating how often the correct translation is within the $k$ retrieved nearest neighbours of the query). Throughout this work we report P@1, which is equivalent to accuracy, but we also provide results with P@5 and P@10 in the Appendix. Conclusion With this work we challenge the standard practices in learning cross-lingual word embeddings. We empirically showed that the choice of the hub language is an important parameter that affects lexicon induction performance in both bilingual (between distant languages) and multilingual settings. More importantly, we hope that by providing new dictionaries and baseline results on several language pairs, we will stir the community towards evaluating all methods in challenging scenarios that include under-represented language pairs. 
Towards this end, our analysis provides insights and general directions for stronger baselines for non-Anglocentric cross-lingual word embeddings.
Unanswerable
4670e1be9d6a260140d055c7685bce365781d82b
4670e1be9d6a260140d055c7685bce365781d82b_0
Q: Did they experiment with the dataset on some tasks? Text: Introduction Named-entity recognition (NER) is an information extraction (IE) task that aims to detect and categorize entities to pre-defined types in a text. On the other hand, the goal of text categorization (TC) is to assign correct categories to texts based on their content. Most NER and TC studies focus on English, hence accessing available English datasets is not a issue. However, the annotated datasets for Turkish NER and TC are scarce. It is hard to manually construct datasets for these tasks due to excessive human effort, time and budget. In this paper, our motivation is to construct an automatically annotated dataset that would be very useful for NER and TC researches in Turkish. The emergence of structured and linked semantic knowledge bases (KBs) provide an important opportunity to overcome these problems. Approaches that leverage such KBs can be found in literature BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . However, using the structured data from KBs is a challenging task for linking named entities and domains to raw texts due to ambiguous texts and named entities BIBREF4 . In this work, we publish TWNERTC dataset in which named entities and categories of sentences have been automatically annotated. We use Turkish Wikipedia dumps as the text source and Freebase to construct a large-scale gazetteers to map fine-grained types to entities. To overcome noisy and ambiguous data, we leverage domain information which is given by Freebase and develop domain-independent and domain-dependent methodologies. All versions of datasets can be downloaded from our project web-page. Our main contributions are (1) the publication of Turkish corpus for coarse-grained and fine-grained NER, and TC research, (2) six different versions of corpus according to noise reduction methodology and entity types, (3) an analysis of the corpus and (4) benchmark comparisons for NER and TC tasks against human annotators. To the best of our knowledge, these datasets are the largest datasets available for Turkish NER ad TC tasks. The rest of the paper is organized as follows: In Section 2, we briefly investigate the literature about NER, TC and datasets which are used in these research. In Section 3, we explain the construction of large-scale gazetteers by using Freebase. In Section 4, we explain how to use the gazetteers to automatically annotate and categorize Wikipedia texts to construct datasets along with dataset statistics, and noise reduction methodologies. Our evaluation about the quality of these constructed datasets are reported in Section 5. Related Work Named Entity Recognition (NER) and Text Classification (TC) are well-researched NLP tasks relevant to large amount of information retrieval and semantic applications. TC research predates to '60s; however, it is accepted as a research field in '90s with the advances in technology and learning algorithms BIBREF5 . On the contrary, the classical NER task is defined in MUC BIBREF6 and CoNLL BIBREF7 conferences with coarse-grained entity types: person, location, organization and misc. In addition, few studies address the problem of fine-grained NER where the challenge is to capturing more than four entity types BIBREF8 , BIBREF9 , BIBREF10 . As research on NER has been pushing the limits of automated systems performing named-entity recognition, the need for annotated datasets and benchmarks is also increasing. 
Knowledge bases are important for NLP researches, since they provide a structured schema of topics that can be used to annotate entities with fine-grained types and/or categorize raw texts into related domains. Steinmetz et al. BIBREF11 published benchmark evaluations that compare three datasets that use semantic information from KBs: DBpedia Spotlight BIBREF3 , KORE50 BIBREF2 , BIBREF12 and the Wikilinks Corpus BIBREF13 . These datasets are in English and constructed with the aim of evaluating the performance of NER systems. The authors present the statistics of each dataset and baseline performances of various algorithms. There are other methodologies which leverages KBs to named entity extraction and linking; however, most of them are not available to public BIBREF2 , BIBREF0 . Constructing a comprehensive dataset for TC is tougher than NER since there is no limit for the number of categories that are represented in such sets. In general, there are many TC datasets available in English for many different problems such as sentiment analysis BIBREF14 and categorizing gender BIBREF15 . The largest and the most popular dataset among them is Reuters Corpus Volume 1 (RCV1) which consists of manually categorized 800K news stories with over 100 sub-categories under three different main categories BIBREF16 . This version the dataset has problems with document categories and suffers from lack of documentation about the dataset. Lewis et al. propose an improved version of this dataset with reduced categorization mistakes and provide a better documentation BIBREF17 . Research on Turkish NER and TC are very limited compared to English and several other languages. The main reason is the lack of accessibility and usability of both Turkish NER and TC datasets. The most popular Turkish NER dataset is introduced by Gökhan et al. BIBREF18 . This dataset contains articles from newspapers, approximately 500K words, and is manually annotated with coarse-grained entity types. Tatar and Çiçekli propose another coarse-grained NER dataset BIBREF19 ; however, it contains only 55K words which makes this dataset to less preferable than previous dataset. More recent studies focus on Turkish NER in social media texts BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . Due to the research focus in the field, several Twitter-based coarse-grained NER datasets are published BIBREF21 , BIBREF24 , BIBREF25 . According to our knowledge, there is no literature available regarding to fine-grained NER in Turkish. Turkish TC researchers tend to construct their own, case specific datasets in general BIBREF26 . The newspapers are the main text source for such studies since they are easy to obtain and classify manually BIBREF27 , BIBREF28 , BIBREF29 . When the amount of annotated data is considered to train state-of-the-art learning algorithms, aforementioned Turkish datasets suffer from the lack of enough data. The main bottlenecks are requires human effort and time constraint, which limits the size and scope of the constructed datasets. In contrast, our aim is to provide larger, more comprehensive and insightful Turkish datasets for both NER and TC by using knowledge bases to create large-scale gazetteers and eliminating human factor in the annotation process. In the next section, we will explain our dataset construction methodology starting with building gazetteers by using Freebase and Wikipedia. 
We will investigate how to build a graph crawler algorithm to crawl the knowledge in Freebase and to map its entity types and domain information into the raw texts automatically. We will also discuss noise reduction methods and propose three different versions of the dataset. Constructing Gazetteers Gazetteers, or entity dictionaries, are important sources for information extraction since they store large numbers of entities and cover vast amount of different domains. KBs use a graph structure that is used to represent thousands of films and/or millions of songs in gazetteers. Hence, such graph structures can be efficient to construct large-scale gazetteers. In our work, we use Freebase as the KB since it provides both quality and quantity in terms of entity types and domains for Turkish. Freebase has 77 domains that cover many different areas from music to meteorology with approximately 50M total entities. Among them, Turkish has 300K entities, and approximately 110K of them have a link to corresponding Turkish Wikipedia page. By using KBs, one can eliminate the necessity of creating semantic schema design, collecting data and manually annotating raw texts. However, such large entity lists contain ambiguous and inaccurate information that can impact the quality. For instance, both film and boat domains contain “Titanic" in their entity lists, and such ambiguity could create false links. Considering the number of entities, the importance of disambiguating named entities becomes more important. Our work is inspired from Heck et al. BIBREF0 who employ a method that takes advantage of user click logs of a web search engine in order to improve precision of gazetteers while maintaining recall. Since we do not have any sources to access such user click logs, we depend on attributes, such as entity domains, types and properties, that Freebase provides in order to improve the quality of our gazetteers. Note that a named entity's domain, type and property are represented as /domain/type/property in Freebase. Freebase distinguishes entities that exist in more than one domain by unique machine id (mid ). For instance, Titanic is an entity in both film and boat domains with two mids depending on the domain. On the other hand, there are cases in which same mids can be associated with multiple domains. For instance the mid of Titanic in the film domain is also used in the award domain. These cases occur when domains are closely related. Note that mid is language independent. Hence, an entity has the same mid regardless of its language; however, the information related to that entity can differ, e.g., missing equivalence of entities, translation differences. Domains covered in Freebase have large amount of entities and related descriptive texts. However, for Turkish, we need to filter or merge some domains due to insufficient number of related raw texts. For instance, we merge “American football", “cricket", “ice hockey" domains under already existing sports domain. During annotation, due to ambiguous cases in Freebase, we first compute the domain and the entity type distribution of directly related (first order) relations of the selected named entity. Among the candidates, the entity type that contains the most first order relations in the knowledge graph is selected as the type of the entity. This ensures that the most informative entity type is selected for the annotated entity. 
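The type-selection heuristic described above — pick the candidate type backed by the most first-order relations — can be sketched as follows; the relation names attached to the Titanic entity are hypothetical placeholders, not actual Freebase property lists.

```python
from collections import Counter

def select_entity_type(candidate_relations):
    """candidate_relations maps each candidate Freebase type of an entity
    to the list of its first-order relations in the knowledge graph; the
    type backed by the most relations wins."""
    counts = Counter({t: len(rels) for t, rels in candidate_relations.items()})
    return counts.most_common(1)[0][0]

# Hypothetical first-order relations for the ambiguous "Titanic" entity.
titanic = {
    "/film/film": ["directed_by", "starring", "release_date", "genre"],
    "/award/award_winning_work": ["awards_won"],
}
print(select_entity_type(titanic))  # /film/film
```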
For instance, while annotating the Wikipedia page of “Titanic (film)" the possible candidates are /film/film and /award/award_winning_work. Our method chooses /film/film as the domain and type of the entity since it has more information compared to its competitor. After determining the entity type of each entity, we form large-scale gazetteers for Turkish which contain 300K named-entities. Each entity in the gazetteer has its Wikipedia description, determined type and 1st-order relations. Note that some entities may not have Wikipedia description due to several reasons, e.g. song names or deleted Wikipedia pages; however, we use such entities in the annotating process. Fine-Grained Annotations A knowledge graph consists of nodes and edges between nodes which are defined by the schema of a domain. Nodes are entities and edges represent relations between nodes, which are properties in Freebase. Examples of entities and relations are shown in Figure FIGREF5 , for “film" and “people" domains. The central node having the most number of relations in the film domain is the “film_name". We call directly connected relations to the central node as first-order relations. Name of the director who directed the movie is “film_director" relation. Since directors are people, they have also relations in “person" domain. Through such relationship we can get second-order relations about a movie. We benefit this graph structure along with the raw texts to create multi-purpose, annotated and categorized corpora. Inspired by the approach of BIBREF0 , we create a graph crawling algorithm that is capable of categorizing sentences into Freebase domains and annotate named entities within the sentences with Freebase types and properties. This set of annotations are the Fine-Grained Annotations (FGA). Our method consists 5 steps and is applicable to both English and Turkish (with slight changes for sentence processing). Select the central node from gazetteers. This node is “Central Pivot Node" (CPN). It is the main entity-type of the domain, e.g. film names in film domain. Retrieve the descriptions provided by Wikipedia or Freebase. If Wikipedia has the corresponding page, fetch full texts from dump. Else if only Freebase description is available, use the description. Otherwise, return to the first step. Since several Turkish Wikipedia texts are direct copy of English version, language detection is applied on all texts and English texts are eliminated. Extract sentences from raw texts and annotate sentences by longest-string pattern matching where the first-order relations of CPN are used resulting in IOB style annotation. We use Aktaş and Çebi's sentence detection method to extract sentences from full texts BIBREF30 . Extend annotation process with second-order relations. Categorize sentences by selecting the domain of entities that is used the most. One should consider that higher order relations can create a long chain of relationships. In our work, we limit this chain with the second-order relations, since further relations provide less information while increasing ambiguity and inconsistency of automated annotations. Statistics of the FGA TWNERTC contains 300K entities in total of which 110K have Turkish descriptions (from Freebase or Wikipedia dump). There are totally 700171 annotated sentences from 49 different domains with the largest and smallest domains being people (139K sentences) and fashion (493 sentences) subsequently. 
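A greedy longest-match sketch of the IOB annotation step mentioned above; the gazetteer entries and the English example sentence are toy stand-ins, while the actual pipeline runs on Turkish Wikipedia sentences with Freebase-derived types.

```python
def iob_annotate(tokens, gazetteer):
    """Greedy longest-string matching of a token list against a
    gazetteer that maps entity token tuples to an entity type."""
    tags = ["O"] * len(tokens)
    max_len = max(len(key) for key in gazetteer)
    i = 0
    while i < len(tokens):
        match = None
        # Try the longest span starting at position i first.
        for span in range(min(max_len, len(tokens) - i), 0, -1):
            key = tuple(tokens[i:i + span])
            if key in gazetteer:
                match = (span, gazetteer[key])
                break
        if match:
            span, etype = match
            tags[i] = "B-" + etype
            for j in range(i + 1, i + span):
                tags[j] = "I-" + etype
            i += span
        else:
            i += 1
    return tags

# Toy gazetteer with invented entries.
gaz = {("New", "York"): "location", ("New", "York", "Times"): "organization"}
print(iob_annotate("I read the New York Times today".split(), gaz))
# ['O', 'O', 'O', 'B-organization', 'I-organization', 'I-organization', 'O']
```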
TWNERTC consists of 10997037 tokens without punctuation and among these tokens approximately 2M of them are annotated. 16K of the tags are unique which results in approximately 332 unique entity type per domain on the average. Note that, even if a sentence is categorized into the film domain, it can contain entities from other domains, such as person and time. Location is the domain having the highest number of unique tags (1051) whereas physics has the least amount of unique tags (15). Disambiguate Noisy Entity Types We refine the gazetteers to minimize the effect of noise in the generated annotations and assigned domains. However, due to the nature of automated algorithm, we find that entity annotations have still inconsistent or missing types while categorization process is resulted with more accurate in general. In order to reduce remaining noise, we apply both domain dependent and domain independent noise reduction. Domain dependent approach finds the most common entity type of every entity according to the domain of the sentences. Then, entities are re-annotated with the common entity types. Domain independent approach follows the same process without using domain information while eliminating the noisy information. Statistics about these two versions of the dataset are presented in Table TABREF7 . Transform FGA to Coarse-Grained Annotation FGA provides fine-grained annotations with many detailed entity types and properties. However, the amount of different entity types affects the learning algorithms negatively. Moreover, it is hard to evaluate the quality of the annotations when most works in literature performs coarse-grained entity recognition. Thus, we provide a coarse-grained version of the FGA datasets. In order to transform FGA to coarse-grained annotation (CGA), we map each fine-grained entity type in each domain to a coarse-grained entity type, i.e. person, organization, location and misc. We keep the IOB notations in types while converting the type. In this process, we eliminate several domains, such as meteorology, interests and chemistry, due to the lack of types that can be mapped to a coarse-grained version. This elimination process leaves 25 unique domains in CGA-based datasets. In coarse version of the dataset, there are approximately 380K sentences. Similar to FGA statistics, the people domain has the highest number of sentences (104508) while law domain has the least number of sentences (454) among all 25 domains. The number of tokens is 7326286 with punctuation. Among these tokens, 851123 of them are annotated. Unlike FGA, CGA has only 4 types for each domain. 19 of the domains contains all 4 types whereas the remaining domains might contain 2 or 3 entity types such as geography and food. We also transform the post-processed datasets we introduced in previous chapter to CGA. Table TABREF16 presents the details of created corpora with CGA. In total, we publish six datasets (one original and two post-processed with FGA, one original and two post-processed with CGA). Evaluation To experimentally evaluate TWNERTC, five human annotators categorize and annotate test sets that are sampled from the datasets. We create six test sets (3 CGA, 3 FGA) with 10K-word for NER and one test with 2K sentences for TC. We compare annotations and domains of these sets with manually created ground truths. For NER evaluation, we follow two different approaches for coarse-grained and fine-grained versions. 
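The two noise-reduction strategies can be sketched with a single voting routine, switching on whether the sentence domain is part of the key; the annotation triples and type strings below are illustrative, not taken from TWNERTC.

```python
from collections import Counter, defaultdict

def most_common_types(annotations, use_domain=False):
    """annotations: iterable of (domain, entity, entity_type) triples.
    Returns a re-annotation table: entity (or (domain, entity)) -> the
    most frequent type observed for it across the corpus."""
    votes = defaultdict(Counter)
    for domain, entity, etype in annotations:
        key = (domain, entity) if use_domain else entity
        votes[key][etype] += 1
    return {key: counter.most_common(1)[0][0] for key, counter in votes.items()}

# Invented annotation triples for illustration.
ann = [
    ("film", "Titanic", "/film/film"),
    ("film", "Titanic", "/film/film"),
    ("boats", "Titanic", "/boats/ship"),
]
print(most_common_types(ann))                   # domain-independent voting
print(most_common_types(ann, use_domain=True))  # per-domain types kept apart
```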
While we evaluate the CGA versions against manually annotated ground-truths, it is an almost impossible to evaluate FGA versions. Hence, we train a fine-grained NER model to predict top-5 possible entity types for all entities and human annotators create ground-truths by using these predictions. For both cases, we extract 10K-word test sets for all three versions (original and post-processed). Note that, test sets are not identical but randomly selected sentences from the datasets. In addition, we exclude IOB tags since we prioritize evaluating entity type agreement in our evaluation results. For evaluating CGA versions, given the sentence and the corresponding automatically created annotation, annotators are allowed to change any entity type with one of the five possible types, i.e. person, organization, location, misc or O (out). Finally, we merge the results of all annotators such that if at least 3 annotators agree on the same type for an entity, that type is the ground-truth. If there is no agreement, we keep the entity type as it is. For evaluating FGA, we use a fine-grained entity recognizer, FIGER BIBREF31 , to create ground-truths. It is not an ideal evaluation approach since FIGER is designed for English and have not been tested for other languages according to our knowledge. However, more than thousands of different entity types are available in TWNERTC, and it is impracticable to ask human annotators to construct such fine-grained ground-truths manually from scratch. Therefore, we train a Turkish fine-grained model by using all remaining sentences and predicted possible types for entities in the test sets. Then, we ask human annotators to rank types of an entity from the most relevant to the least relevant. They are also allowed to suggest alternative types different than the given choices. We randomly sample 2K sentences from the unmodified corpus as TC test set. We train an internal classification algorithm with the rest of sentences and get top five predicted domains for the test sentences. We present predicted categories to human annotators and ask them to rank domains from the most relevant to the less relevant. They are also allowed to suggest different domains among the 49 domains which are represented in full corpus. Finally, we rank domains of each test sentence according to the annotator agreement and form the test set such that each sentence has 5 possible domains where first domain is the most relevant domain. Evaluation Results for Coarse-Grained Annotations In Table TABREF19 , we present the number of entity types exist in automatically and manually annotated sets with the number of changes that annotators have made. We define changes from type O to any other type is an addition and opposite of this action is a removal. Misc is the most added, removed and changed type by annotators. It is an acceptable outcome since this specific type covers vast amount of entities except person, location and organization. We do not consider the number of added entity types as a major problem, since our gazetteers do not have infinite information and we have future plans to improve it. However, miss-annotated types are the real danger since if the amount of such mistakes increase, performance of the learning algorithms is affected negatively. In "change" type of , annotators change misc type to mainly organization and person. As a conclusion, among the automatically annotated coarse-grained entity types, there are %76 matching ratio without O tags and manually added types. 
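A minimal sketch of the annotator-agreement rule just described (at least 3 of 5 annotators must agree on a type, otherwise the automatic type is kept); the example labels are invented.

```python
from collections import Counter

def merge_annotations(automatic_type, annotator_types, min_agreement=3):
    """Keep an annotator-chosen type only if at least `min_agreement`
    annotators proposed the same type; otherwise fall back to the
    automatically assigned type."""
    label, count = Counter(annotator_types).most_common(1)[0]
    return label if count >= min_agreement else automatic_type

print(merge_annotations("misc", ["person", "person", "person", "misc", "O"]))
# person  (3 of 5 agree)
print(merge_annotations("misc", ["person", "person", "misc", "misc", "O"]))
# misc    (no 3-way agreement, automatic type kept)
```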
Additionally, we present precision, recall and F-score values in Table TABREF21 . The dataset with domain-dependent post-process provides better NER performance compared to other two versions in general. Moreover, both post-processing methods improve the performance compared to the original dataset. Further, it can be observed that misc type is has the lowest F-Score among all types in all versions; however, this result is expected since misc's coverage is larger than the other three types and makes it more vulnerable to mismatches. Evaluation Results for Fine-Grained Annotations As we discuss earlier, we train an external fine-grained algorithm, FIGER, to create Turkish NER models for all versions of the TWNERTC. By using the resulted models, we take five possible predicted type for each entity that are represented in test sets, and provide them as ground-truths to five human annotators. Obviously, FIGER designed to solve fine-grained NER in English; however, to evaluate the automated fine-grained types in a reasonable time, we leverage predictions of this algorithm. In Table TABREF23 , we present the F1-scores of trained models on each test set. We provide these scores to give a better insight to researchers about ground-truths. Note that, strict represents the original F1-score formula, while loose macro and loose micro scores represent variations of the same formula BIBREF31 . It can be observed that the model trained with original TWNERTC performs poorly compared to post-processed versions. Since domain independent (DI) noise reduction method ensures that an entity can have only one type, it improves the performance more than the domain dependent (DD) method. Table TABREF24 presents the human annotators evaluation on automated fine-grained NER datasets, given FIGER predictions as possible ground-truths. Annotators rank the provided ground-truths and we check their ranking agreements. Eventually, top-1 agreement is hard to fulfill since our gazetteers contains thousands of entity types, and an entity may have more than ten different possible options. On the other hand, top-5 agreements provide promising results considering the amount of possible ground-truths. Evaluation Results for TC Original TWNERTC contains 49 different domains. The number of domains causes ambiguities in categorization among sentences depending on the context understanding. Thus, we evaluate three levels of accuracy. From Table TABREF26 , we can see that automatically assigned domains have relatively low direct matches with annotators according to top-1 score. However, when we observe top-3, accuracy scores are doubled. Furthermore, error rates are less than %2 when all five ground truths are considered. Note that in top-3 and top-5, we only considered whether automatically assigned domain exists or not in the ground truth list. Amount of the difference between from top-1 to top-3 scores is mainly caused by annotators' different understanding of the sentence context. For instance, a sentence about Lionel Messi's birth location is categorized as people in the test set. Whereas ground-truths start with soccer and followed by people and location. In addition, similar domains, such as sports and soccer, are also the reason of this difference. In overall, results validates that automatically assigned domains are likely similar to what human annotators suggest to corresponding sentences. However, while automation process is limited to the context, humans can use their knowledge when suggesting domains. 
Hence, low accuracy in top-1 score is not a mistake of the methodology but a shortcoming in the process. Conclusion and Future Research Directions We have described six, publicly available corpora for NER and TC tasks in Turkish. The data consists of Wikipedia texts which are annotated and categorized according to the entity and domain information extracted from Freebase. We explain the process to construct the datasets and introduce methodologies to eliminate noisy and incorrect data. We provide comprehensive statistics about dataset content. We analyzed subsets from these datasets and evaluate automatically created annotations and domains against manually created ground-truths. The final results show that automatic annotations and domains are quite similar to the ground-truths. The obvious next step is to develop learning algorithms for NER and TC tasks to find baselines using traditional machine learning algorithms, and extending these baselines with approaches. Since TWNERTC provides a vast amount of structured data for researchers, deep learning methods can be exploited to solve fine-grained NER problem. Acknowledgments This project is partially funded by 3140951 numbered TUBITAK-TEYDEB (The Scientific and Technological Research Council of Turkey – Technology and Innovation Funding Programs Directorate).
Yes
c2da598346b74541c78ecff5c9586b3857dd01b5
c2da598346b74541c78ecff5c9586b3857dd01b5_0
Q: How better does the hybrid tiled CNN model perform than its counterparts? Text: INTRODUCTION Sentiment analysis or opinion mining is a sort of text classification that assigns a sentiment orientation to documents based on the detected contextual polarity BIBREF0. In the past, research work has focused only on the overall sentiment of a document, trying to determine if the entire text is positive, neutral or negative BIBREF1. However, besides predicting the general sentiment, a better understanding of the reviews could be undertaken using the more in-depth aspect based sentiment analysis (ABSA) BIBREF2, BIBREF1. Specifically, ABSA’s goal is to predict the sentiment polarity of the corpus’ target entities and aspects (e.g. the possible aspects of the entity “movie” could be “the plot”, “the actors’ acting” or “the special effects”). While it is easy to establish the text entities based on the a priori information about text corpus, the aspect terms are difficult to infer and usually the training process requires to use a predefined set of term categories BIBREF3, BIBREF4, BIBREF5. In this paper, we propose a framework that applies the idea of ABSA only conceptually and tries to enhance the performance of one of the most popular baseline models for sentiment classification. More specifically, we adjust a convolutional neural network (CNN) to the ABSA requirements forcing it to capture more diverse features (both sentiment information and aspects). The CNN models were proposed for object recognition BIBREF6 and initially were used within computer vision. Soon they were adapted or integrated with other deep learning architectures to be compatible with a broad array of tasks, including NLP: classification BIBREF7, recognition of entailment and contraction between sentences BIBREF8 or question answering BIBREF9. Even if the CNN model already captures the most salient features by employing global max-pooling operations on multiple feature maps, the effectiveness of extraction depends not only on the number of feature maps but also on the initialisation of their filter weights. Usually, all the filter weights of a convolutional layer are initialised with small values, following a uniform distribution. The max-pooling operations can generate a smaller number of different features than the given number of feature maps as a result of using the same input and of applying filters with weights of the same distribution. Our approach to control better features extraction and to force the network to look for features in different regions of a text consists of applying the tiled CNN (TCNN), a variant of the CNN architecture which considers multiple filters per feature map. The TCNN is an improved CNN that was proposed for image processing to capture a wide range of picture invariances BIBREF10. Traditionally, the CNN architecture already learns the translational invariance due to its convolving operation but the TCNN goes forward and handles more complex cases such as viewpoint/rotation and size invariances. Since its development, the TCNN model has been mainly used for computer vision in tasks like image classification BIBREF11, object and emotion recognition BIBREF12, BIBREF13. The reason behind the TCNN’s restricted applicability is its nature of multiple invariances learner. In this paper we adjust the model to meet the NLP requirements and set each filter of the feature map to cover only a set of $n$-grams (not all $n$-grams as in the case of pure CNN models). 
Following a precise filter structure per feature map, the extraction of the most relevant features is more effective and depends less on the initial values of the weights. In addition to the TCNN, a new network called hybrid tiled CNN (HTCNN) is introduced to outweigh the disadvantage of the TCNN model's inflexible filter structure. Using the idea of more diverse feature extraction with multiple filters, we create word clusters and allow a filter of a feature map to convolve only on the words that appear in the same context (assigned to the same cluster) and on their $n – 1$ neighbouring words (where $n$ is the size of the $n$-gram or the filter size). In this paper, word clusters are computed using the expectation-maximization (EM) algorithm using Gaussian distribution. Since we do not include information about aspects, this approach allows us to identify only the document-level sentiment polarities. However, HTCNN’s structure with multiple filters per feature map can be easily adjusted to ABSA sentiment classification task. An interesting idea could be to replace general word clusters defined a priori with sentence-level word clusters defined with respect to each aspect based on attention scores. We let this extension for future work. Our contributions are summarized as follows: We adjust the TCNN (which so far has been used only for image processing) to the NLP field. We design a novel hybrid tiled CNN model with multiple filters per feature map. The filter structure is flexible and each filter of the feature map can convolve only on the similar words of a given word cluster and on its $n-1$ neighbouring words, where $n$ is the window size. Experimental results prove the effectiveness of the introduced model over the simple architecture of the CNN. The remainder of the paper is organized as follows. The next section discusses the related literature. The third section depicts the TCNN in NLP and its hybrid variant. The fourth section presents the details of the experiments and the results and the last one concludes the paper. The source code used to implement the proposed models can be found at https://github.com/mtrusca/HTCNN RELATED WORK In text classification and sentiment analysis, CNN and RNN are the two main deep neural network architectures that produce similar results. While the CNN extracts the most relevant $n$-grams, the RNN finds context dependencies within an entire sentence. Still, for sentence classification, especially sentiment analysis it is tempting to use CNN models since polarities are usually determined only by some keywords BIBREF15. Regarding the sentence’s length, both neural networks have similar performances, especially for the case of short text. While for long sentences, the RNN could be considered more adequate than the CNN, some studies prove the contrary BIBREF16, BIBREF17, arguing that the RNN is a biased model where the later words of a sequence are more important than the others. Currently, the CNN is a popular method for text classification BIBREF18, BIBREF19 and even models with little tuning of the hyperparameters provide competitive results BIBREF14, BIBREF20. The variant of the CNN applied for sentiment analysis task could vary from simple approaches BIBREF21 to more complex ones. For example, in BIBREF22, the idea of linguistic homophily is exploited and sentiment polarities are found by combining a single-layer CNN model with information about users’ social proximity. 
In BIBREF23, the results of a CNN sentiment model are improved by employing lexicon embeddings and an attention mechanism. Regarding the ABSA task, CNNs are not as popular as recurrent networks like LSTM and GRU, even though they have recently shown potential to address this problem. The CNN model was introduced to tackle an ABSA problem in BIBREF24 by using parameterized gates and parameterized filters to include aspect information. Similarly, in BIBREF25 it was shown that a simple CNN model with Gated Tanh-ReLU units is more accurate than the traditional approach with LSTM and attention mechanism. Since not all datasets provide information about aspects, we try to increase the sensitivity of convolutional neural networks to the aspects and their related keywords by using a novel hybrid tiled CNN model. Our approach is similar to the one proposed in BIBREF26 in terms of multiple inputs for simultaneous convolutional layers, but instead of using different word embeddings we create word clusters and define a sentence representation for each one. MODELS The standard one-dimensional CNN architecture, employed in BIBREF27 and since then widely used in many papers, assumes that the feed-forward neural network is a composition of convolutional layers with the max-pooling operation and one or more fully connected layers with non-linear activation functions. An input text of length $m$ is represented as a matrix obtained by concatenating word embedding vectors BIBREF14: This matrix is used as an input for a convolution operation that applies a filter $w$ on all possible windows of $n$ consecutive words at a given stride value. A new feature map is computed by applying a non-linear function $f$ with a bias term $b$ ($b$ $\in \mathbb {R}$) on the convolving results. Given the window of $n$ words $X_{i:i+n-1}$, the generic object $C_i$ of the feature map is generated by $C_i = f(w \cdot X_{i:i+n-1} + b)$. To capture the most important feature $C_{i}$, we apply the max-pooling operation. Usually, a single feature is not enough for depicting the entire sentence and therefore it is a common practice to generate multiple feature maps for each convolutional layer. The features with the highest value for each feature map are fed into the following convolutional or fully connected layers. In terms of sentiment analysis, we are interested in extracting not only sentiment information but also information about the text corpus’ entities and their aspects (attributes) BIBREF0. Based on this need, the use of multiple feature maps per convolutional layer turns out to be a compulsory step. MODELS ::: TCNN In this paper, we attempt to introduce the TCNN in the field of text classification. The idea behind the TCNN is simple and assumes that each feature map is computed using at least two filters BIBREF10. The map of filters applied on the input is not randomly chosen but has to follow a structure that imposes different nearby filters for each filter, while the next and previous $k$-th filters have to be the same. The term $k$ is known as the tile size and it is a hyperparameter that indicates the number of filters corresponding to each feature map. The generic object $C_i$ of a feature map is defined analogously, with the filter and bias selected according to the position index $i$ modulo the tile size $k$. If $k$ is 1 then the tiled convolutional layer turns into a simple convolutional layer and if $k$ is greater than or equal to $m-n+1$ then we have a fully connected layer. While one of the CNN's rules is parameter sharing, the tiled CNN follows it only partially so that weights are not entirely tied. 
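To make the tiling scheme concrete, the following is a minimal NumPy sketch (an illustration only, not the authors' released code) of a tiled one-dimensional convolution over a sentence matrix: position $i$ of a feature map is convolved with filter $i$ modulo $k$, and a global max-pooling operation keeps the strongest response of the map. The sizes, the tanh non-linearity and the random toy inputs are arbitrary choices for the example.

import numpy as np

def tiled_feature_map(X, filters, biases):
    # X: (m, d) word-embedding matrix of a sentence; filters: list of k arrays of shape (n, d).
    m, d = X.shape
    k, n = len(filters), filters[0].shape[0]
    responses = []
    for i in range(m - n + 1):
        w, b = filters[i % k], biases[i % k]       # the same filter repeats every k positions
        responses.append(np.tanh(np.sum(w * X[i:i + n]) + b))
    return max(responses)                          # global max-pooling over the feature map

rng = np.random.default_rng(0)
X = rng.normal(size=(7, 5))                        # e.g. a 7-word sentence with 5-dim embeddings
features = []
for _ in range(3):                                 # three feature maps, bigram filters, tile size 2
    filters = [rng.normal(scale=0.1, size=(2, 5)) for _ in range(2)]
    features.append(tiled_feature_map(X, filters, biases=[0.0, 0.0]))
print(features)

Setting the tile size to 1 recovers the ordinary convolutional layer, while a tile size of at least $m-n+1$ corresponds to the fully connected case, mirroring the two extremes mentioned above.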
To better understand the way the tiled CNN works with textual data, we consider an example sentence (“We choose to go to the moon”) in Figure FIGREF4. We set the number of feature maps to three and the filter size and the tile size to two. The first filter convolves over the first two words ("we" and "choose"), then the second one covers the second and the third words ("choose" and "to"), then the first filter goes over the third and the fourth words ("to" and "go") and so on. This rule is applied to each feature map, which means that six weight matrices have to be initialised. Then a global max-pooling operation covers each of the three feature maps and the most representative $n$-grams are obtained (regardless of which filter identified them). The CNN's representation is similar to the one presented in Figure FIGREF4 but, unlike its tiled variant, there is only one filter per feature map (the tile size is equal to 1). The TCNN implementation can be seen as a neural network with multiple simultaneous convolutional layers. Each layer corresponds to a filter and uses a stride value equal to the tile size. An important note about this process concerns the shape of the input that feeds each convolutional layer. Generally, filter $i$ ($i \le k$) can slide from the $i$-th word to the word with the index equal to: where $\lfloor x \rfloor $ and $\lbrace x \rbrace $ represent the integer and decimal part of a number $x$, respectively. According to this rule, we have to adjust the input of each convolutional layer. This rule, together with the constraint on the stride value, forces the filters to convolve on different $n$-grams and to jointly cover the entire input. Before moving further, we have to add two sets of pooling layers. The layers of the first set correspond to the $k$ convolutional layers. The second set has just one layer covering the previously concatenated results. In this way, it is ensured that the multiple simultaneous CNN behaves just like a TCNN model with a single pooling operation. The reasoning behind using the TCNN model for text classification is the need to capture more diverse features (like aspects and sentiment words) that could take our results closer to the sentences’ labels. Even though this task is theoretically accomplished by the simple CNN, the risk of getting a smaller number of different features than the number of feature maps is high. On the other hand, the TCNN, with its integrated multiple filters per feature map, forces the model to look for features in diverse regions of a sequence, enhancing the quality of extraction. By choosing different values of $k$, we get a palette of models that are a trade-off between estimating a small number of parameters and having more different features. MODELS ::: HTCNN The disadvantage of the TCNN is the inflexible structure of the filters sliding on the input, which requires $k$ repetitive filters with no possibility to change the ordering. Knowing that words with semantic similarity are likely to appear in a similar context BIBREF28, one reasonable filter map could be the one that applies the same filter on all these words and other filters on the remaining words. For example, in a sentence like “The lens of my digital camera has a powerful/weak optical zoom” it could be useful to apply the same filter on the “powerful” and “weak” words and different ones on the other words. 
Knowing that word embeddings are built to capture not only words’ context but also their syntactic and semantic similarity, words like “powerful” and “weak” have similar word embeddings and could be assigned to the same cluster. Using an appropriate clustering model, we can group the word vectors in $k$ clusters. In the experiments, the clustering solution is provided by the EM algorithm with Gaussian distributions. The reason behind this choice is that Gaussian mixture models support mixed membership and are robust against errors and flexible regarding cluster covariance BIBREF29. The EM algorithm has two steps that are repeatedly applied until convergence is reached. First, we compute the probability of a word belonging to a cluster, knowing that the words of the clusters follow normal distributions (the expectation step). Second, we update the cluster means according to the previously computed probabilities (the maximization step). For each cluster, we code the words of a sentence outside the cluster with “PAD” and leave its associated words unchanged. Sequences generated based on the indices of dictionary words get 0 for the “PAD” word and a value greater than 0 otherwise. In this way, we define $k$ inputs (one for each cluster) and use them to feed $k$ simultaneous convolutional layers. The stride value constraint is no longer used because the simultaneous layers already convolve on different words. The problem of the last depicted neural network is the loss of the $n$-gram concept caused by the replacement of some words’ indexes with 0 (words assigned to other clusters than the one associated with the given convolutional layer). This idea is supported by the assumption that words are dependent on their neighbouring words. To overcome this problem, for each convolutional layer we have to modify the input by adding on the left and on the right side (where possible) of each word (whose index is different from 0) its $n-1$ neighbouring words. The added words could be in the same cluster or not. We call this model the hybrid tiled CNN (HTCNN). Considering the above sentence and setting the filter size and the number of clusters to two (the same as setting the tile size value for the TCNN), we assign the words “choose” and “moon” to the first cluster and the other words to the second one. Firstly, we assign the same sentence to each cluster and replace the words of the other cluster with “PAD”. The modified input sentences are: “PAD choose PAD PAD PAD PAD moon” and “we PAD to go to the PAD”. If we add the neighbors (a single word to the left and the right for bigram cases) of each word, the input sentences will change into “we choose to PAD PAD the moon” and “we choose to go to the moon”. The sentence of the first cluster gets the words “we” and “to” as the neighbors of the “choose” word and the word “the” as the neighbor of the “moon” word. The words of the second cluster are: “we”, “to”, “go”, “to”, “the” and the sentence gets the word “choose” as the neighbor of the words “we” and “to” and the word “moon” as the neighbor of the word “the”. This process is described in Figure FIGREF8; for readability we use sentences instead of sequences. The colour of the cluster words assigned to each convolutional layer is red. The colour of the neighbouring words is blue. The remaining words are coded with the word "PAD". If the blue words are replaced with "PAD" words then we get a new model that does not consider the $n$-gram concept and is called the simple HTCNN (SHTCNN). 
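The input construction just described can be summarised in a short sketch. The snippet below is a toy illustration (not the authors' implementation): it clusters word embeddings with a Gaussian mixture fitted by EM, masks each sentence per cluster with "PAD", and then restores the $n-1$ neighbours of every kept word. The embeddings are random stand-ins, so the learned cluster assignment will generally differ from the manual one used in the example above.

import numpy as np
from sklearn.mixture import GaussianMixture

sentence = ["we", "choose", "to", "go", "to", "the", "moon"]
vocab = sorted(set(sentence))
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=4) for w in vocab}             # toy 4-dim word embeddings

k, n = 2, 2                                               # number of clusters and filter size
gmm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
label = dict(zip(vocab, gmm.fit_predict(np.stack([emb[w] for w in vocab]))))

def cluster_input(sentence, cluster):
    kept = [w if label[w] == cluster else "PAD" for w in sentence]   # mask other clusters
    out = list(kept)
    for i, w in enumerate(kept):
        if w != "PAD":                                    # restore the n-1 neighbours of kept words
            for j in range(max(0, i - (n - 1)), min(len(sentence), i + n)):
                out[j] = sentence[j]
    return out

for c in range(k):
    print(c, cluster_input(sentence, c))                  # one input per simultaneous conv layer

Skipping the neighbour-restoring loop yields the SHTCNN inputs, which is exactly the ablation compared in the experiments below.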
EXPERIMENTAL RESULTS We test the TCNN and the HTCNN models on the IMDB movie reviews dataset BIBREF30. The collection has 50,000 documents, evenly separated between negative and positive reviews. Our models based on the convolution operation are compared to a one-layer one-dimensional CNN. The configuration of the CNN baseline model and of the proposed models is presented in Table TABREF13. Regarding word vector initialisation, we use word embeddings trained on the current dataset with word2vec models BIBREF31. Each word is represented by a 200-dimension vector computed by concatenating a continuous bag-of-words (CBOW) 100-dimension vector with a skip-gram (SG) vector of the same size. Sentences are represented using a zero-padding strategy, which means we are applying wide convolutions BIBREF18. Knowing that the IMDB dataset is balanced, the evaluation of the models is done in terms of test accuracy. We use the binary cross entropy loss function and unfreeze the word embeddings during the training phase. Using 4-fold cross-validation, 10,000 reviews are allocated to the testing set and the rest of them are divided between training and validation sets. The cross-validation process has four iterations and at each step another 10,000 reviews are allocated to the validation set. In addition to the TCNN and its hybrid variant, we present the results of the SHTCNN model to see how the lack of neighbouring words affects the performance of the HTCNN. The results of our comparison between the baseline network and our models are listed in Table TABREF15. The baseline CNN has a volatile performance, unlike all other models. While for a filter size equal to two the model has one of the best test accuracies, for bigger sizes (trigram and fourgram cases) the performance decreases quickly and reaches the lowest threshold. All proposed networks have a natural behaviour and their performance grows proportionally with the filter size. Besides the $n$-gram dimension, the tile size or the number of word clusters also has an impact on the results, generally leading to a gradual improvement. The results of the TCNN and of the SHTCNN are complementary: while the first one performs better when we set the tile size to three, the second one has better results for two word clusters. However, we note that the HTCNN achieves the best performance without having the drawbacks of the other two models: the inflexible filter structure of the TCNN and the SHTCNN’s lack of $n$-grams. Naturally, more filters per feature map make the model more sensitive to different features but, at the same time, increase the number of estimated parameters, i.e. the model complexity. Further on, the HTCNN is tested for larger tile sizes, namely four and five word clusters. The results are listed in Table TABREF16. Only the pair (k = 4, n = 2) has a slightly better result than the previous model variants with the same filter size (the pairs (k = 3, n = 2) and (k = 2, n = 2)). All other results are worse than the ones of the HTCNN with the tile size equal to three and the difference is more significant for the case of five word clusters. We conclude that the optimal tile size value for the IMDB dataset that balances the model's capacity to extract different features with its complexity is equal to three. To confirm the performance of the HTCNN over the single-layer CNN, we undertake a second test on the SemEval-2017 English dataset for task 4: Sentiment analysis in Twitter, subtask A: Message Polarity Classification BIBREF32. 
The dataset has 62,617 tweets with the following statistics: 22,277 positive tweets, 28,528 neutral tweets and 11,812 negative tweets. The approach with the highest rank on this subtask was proposed by Cliche BIBREF33 and ensembles an LSTM and a neural network with multiple simultaneous convolutional layers. Since Cliche’s CNN structure is similar to the structure of the HTCNN, we use some of his default settings: the length of the word embeddings, the activation functions, the number of filters per convolutional layer, the dropout layers (with the same dropout probabilities) and the number of epochs. The number of simultaneous convolutional layers used in Cliche’s structure is equal to three, which means that we have to run our HTCNN model for the same number of filters. Table TABREF13 shows an overview of the models' architecture for the SemEval dataset. Word embeddings are trained using CBOW and SG models in a similar way to the one described above for the IMDB dataset and remain frozen over the entire training phase. 10% of the corpus tweets are allocated to the testing set while the remaining tweets are separated between validation and training sets using 9-fold cross-validation. Because the dataset is imbalanced, the evaluation is done using macro-average testing recall. To counteract the imbalance, the cross-entropy loss function uses class weights adjusted inversely proportional to the class frequencies in the training phase. Both the HTCNN and the CNN share the above hyperparameters. Similarly to the IMDB experiments, we set the window size to two, three or four. The results of our comparison on the SemEval dataset are shown in Table TABREF17 and confirm the advantage of the HTCNN over the simple CNN. CONCLUSIONS In this paper, we bring the TCNN model to the field of sentiment classification to improve the process of feature extraction. Due to the constraint of partially untied filter weights, which forces the network to apply the same filter at every $k$-th step, we introduced the HTCNN model. The new architecture combines the benefits of word embedding clustering with the idea of tiling and, if the tile size is chosen properly, the model can achieve competitive performance with the TCNN and better results than the reference CNN model. Setting the tile size or the number of word clusters can be difficult. Because the HTCNN works like a simultaneous CNN structure, too many parallel convolutional layers could lead to both higher feature sensitivity and higher complexity. The CNN does not naturally fit fine-grained aspect-based sentiment analysis, but using the right network configuration we can adjust it to be more sensitive to the corpus’ features and to increase the overall performance. Even though the results of our experiments are modest compared with the state-of-the-art sentiment analysis models for the IMDB and SemEval-2017 datasets, we show that the HTCNN is a good substitute for the CNN in the NLP field and that replacing the CNN with the HTCNN in more elaborate architectures could lead to higher performance. As future work, it would be interesting to see how word networks based on word embeddings improve the clustering solution incorporated in the HTCNN, perhaps in an approach similar to the one proposed in BIBREF34. Moreover, a promising direction would be to have meaningful clusters that denote specific functions (e.g. aspect) or other roles (e.g. syntactic). 
This would give a boost to the transparency of deep models in natural language processing, since the CNN filters (and consequently, the weights) would be directly interpretable.
Unanswerable
013a8525dbf7a9e1e69acc1cff18bb7b8261cbad
013a8525dbf7a9e1e69acc1cff18bb7b8261cbad_0
Q: Do they use pretrained word embeddings? Text: Introduction When we read news text with emerging entities, text in unfamiliar domains, or text in foreign languages, we often encounter expressions (words or phrases) whose senses we are unsure of. In such cases, we may first try to examine other usages of the same expression in the text, in order to infer its meaning from this context. Failing to do so, we may consult a dictionary, and in the case of polysemous words, choose an appropriate meaning based on the context. Acquiring novel word senses via dictionary definitions is known to be more effective than contextual guessing BIBREF3 , BIBREF4 . However, very often, hand-crafted dictionaries do not contain definitions for rare or novel phrases/words, and we eventually give up on understanding them completely, leaving us with only a shallow reading of the text. There are several natural language processing (nlp) tasks that can roughly address this problem of unfamiliar word senses, all of which are incomplete in some way. Word sense disambiguation (wsd) can basically only handle words (or senses) that are registered in a dictionary a priori. Paraphrasing can suggest other ways of describing a word while keeping its meaning, but those paraphrases are generally context-insensitive and may not be sufficient for understanding. To address this problem, BIBREF2 has proposed a task of describing a phrase in a given context. However, they follow the strict assumption that the target phrase is unknown and there is only a single local context available for the phrase, which makes the task of generating an accurate and coherent definition difficult (perhaps as difficult as a human comprehending the phrase itself). On the other hand, BIBREF0 attempted to generate a definition of a word from its word embedding induced from massive text, followed by BIBREF1 that refers to a local context to define a polysemous word by choosing relevant dimensions of their embeddings. Although these research efforts revealed that both local and global contexts of words are useful in generating their definitions, none of these studies exploited both local and global contexts directly. In this study, we tackle a task of describing (defining) a phrase when given its local context as in BIBREF2 , while allowing access to other usage examples via word embeddings trained from massive text (global contexts) BIBREF0 , BIBREF1 . We present LOG-CaD, a neural network-based description generator (Figure FIGREF1 ) to directly solve this task. Given a word with its context, our generator takes advantage of the target word's embedding, pre-trained from massive text (global contexts), while also encoding the given local context, combining both to generate a natural language description. The local and global contexts complement one another and are both essential; global contexts are crucial when local contexts are short and vague, while the local context is crucial when the target phrase is polysemous, rare, or unseen. Considering various contexts where we need definitions of phrases, we evaluated our method with four datasets including WordNet BIBREF0 for general words, the Oxford dictionary BIBREF1 for polysemous words, Urban Dictionary BIBREF2 for rare idioms or slang, and a newly-created Wikipedia dataset for entities. Experimental results demonstrate the effectiveness of our method against the three baselines stated above BIBREF0 , BIBREF2 , BIBREF1 . 
Our contributions are as follows: Context-aware Description Generation In what follows, we define our task of describing a phrase or word in a specific context. Given expression INLINEFORM0 with its context INLINEFORM1 , our task is to output a description INLINEFORM2 . Here, INLINEFORM3 can be a single word or a short phrase and is included in INLINEFORM4 . INLINEFORM5 is a definition-like concrete and concise phrase/sentence that describes the expression INLINEFORM6 . For example, given a phrase “sonic boom” with its context “the shock wave may be caused by sonic boom or by explosion,” the task is to generate a description such as “sound created by an object moving fast.” If the given context has been changed to “this is the first official tour to support the band's latest studio effort, 2009's Sonic Boom,” then the appropriate output would be “album by Kiss.” The process of description generation can be modeled with a conditional language model as DISPLAYFORM0 LOG-CaD: Local & Global Context-aware Description Generator We propose LOG-CaD, a neural model that generates the description of a given phrase or word by using its local and global contexts. In the rest of this section, we first describe our idea of utilizing local and global contexts in the description generation task, then present the details of our model. Wikipedia dataset One of our goals is to describe infrequent/rare words and phrases such as proper nouns in a variety of domains depending on their surrounding context. However, among the three existing datasets, WordNet and Oxford dictionary mainly target the descriptions of relatively common words, and thus are non-ideal test beds for this goal. On the other hand, although the Urban Dictionary dataset contains descriptions of rarely-used phrases as well, the domain of its targeted words and phrases is limited to Internet slang. Therefore, in order to confirm that our model can generate the description of rarely-used phrases as well as words, we constructed a new dataset for context-aware phrase description generation from Wikipedia and Wikidata which contain a wide variety of entity descriptions with contexts. Table TABREF14 and Table TABREF15 show the properties and statistics of the new dataset and the three existing datasets, respectively. The overview of the data extraction process is shown in Figure FIGREF18 . Similarly to the WordNet dataset, each entry in the dataset consists of (1) a phrase, (2) its description, and (3) context (a sentence). For preprocessing, we applied Stanford Tokenizer to the descriptions of Wikidata items and the articles in Wikipedia. Next, we removed phrases in parentheses from the Wikipedia articles, since they tend to be paraphrasing in other languages and work as noise. To obtain the contexts of each item in Wikidata, we extracted the sentence which has a link referring to the item through all the first paragraphs of Wikipedia articles and replaced the phrase of the links with a special token [TRG]. Wikidata items with no description or no contexts are ignored. This utilization of links makes it possible to resolve the ambiguity of words and phrases in a sentence without human annotations, which is one of the major advantages of using Wikipedia. Note that we used only links whose anchor texts are identical to the title of the Wikipedia articles, since the users of Wikipedia sometimes link mentions to related articles. 
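The extraction step can be illustrated with a small sketch. The code below is a deliberately simplified toy (not the authors' pipeline; the link markup handling and the toy lookup table are assumptions): it keeps only links whose anchor text is identical to the article title and replaces the anchor with the [TRG] token to obtain a context sentence for the linked item, reusing the "sonic boom" example from the task definition above.

import re

wikidata_description = {"sonic boom": "sound created by an object moving fast"}  # toy lookup

sentence = "The shock wave may be caused by [[sonic boom]] or by explosion."

entries = []
for m in re.finditer(r"\[\[([^|\]]+)(?:\|([^\]]+))?\]\]", sentence):
    title, anchor = m.group(1), m.group(2) or m.group(1)
    if anchor != title:                       # keep only anchors identical to the article title
        continue
    context = sentence[:m.start()] + "[TRG]" + sentence[m.end():]
    entries.append({"phrase": anchor,
                    "description": wikidata_description.get(title, ""),
                    "context": context})
print(entries)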
Experiments We evaluate our method by applying it to describe words in WordNet BIBREF13 and Oxford Dictionary, phrases in Urban Dictionary and Wikidata. For all of these datasets, a given word or phrase has an inventory of senses with corresponding definitions and usage examples. These definitions are regarded as ground-truth descriptions. Discussion In this section, we present some analyses on how the local and global contexts contribute to the description generation task. First, we discuss how the local context helps the models to describe a phrase. Then, we analyze the impact of global context under the situation where local context is unreliable. How do the models utilize local contexts? Local context helps us (1) disambiguate polysemous words and (2) infer the meanings of unknown expressions. In this section, we will discuss the two roles of local context. Considering that the pre-trained word embeddings are obtained from word-level co-occurrences in a massive text, more information is mixed up into a single vector as the more senses the word has. While BIBREF1 gadetsky2018conditional designed the I-Attention model to filter out unrelated meanings in the global context given local context, they did not discuss the impact the number of senses has on the performance of definition generation. To understand the influence of the ambiguity of phrases to be defined on the generation performance, we did an analysis on our Wikipedia dataset. Figure FIGREF37 (a) shows that the description generation task becomes harder as the phrases to be described become more ambiguous. In particular, when a phrase has an extremely large number of senses, (i.e., #senses INLINEFORM0 ), the Global model drops its performance significantly. This result indicates that the local context is necessary to disambiguate the meanings in global context. As shown in Table TABREF15 , a large proportion of the phrases in our Wikipedia dataset includes unknown words (i.e., only INLINEFORM0 of words have their pre-trained embeddings). This fact indicates that the global context in the dataset is extremely noisy. Then our next question is, how does the lack of information from global context affect the performance of phrase description? Figure FIGREF37 (b) shows the impact of unknown words in the phrases to be described on the performance. As we can see from the result, the advantage of LOG-CaD and Local models over Global and I-Attention models becomes larger as the unknown words increases. This result suggests that we need to fully utilize local contexts especially in practical applications where the phrases to be defined have many unknown words. How do the models utilize global contexts? As discussed earlier, local contexts are important to describe expressions, but how about global contexts? Assuming a situation where we cannot obtain much information from local contexts (e.g., infer the meaning of “boswellia” from a short local context “Here is a boswellia”), global contexts should be essential to understand the meaning. To confirm this hypothesis, we analyzed the impact of the length of local contexts on bleu scores. Figure FIGREF37 (c) shows that when the length of local context is extremely short ( INLINEFORM0 ), the LOG-CaD model becomes much stronger than the Local model. This result indicates that not only local context but also global context help models describe the meanings of phrases. Related Work In this study, we address a task of describing a given phrase/word with its context. 
In what follows, we explain several tasks that are related to our task. Our task is closely related to word sense disambiguation (wsd) BIBREF7 , which identifies a pre-defined sense for the target word with its context. Although we can use it to solve our task by retrieving the definition sentence for the sense identified by wsd, it requires a substantial amount of training data to handle a different set of meanings of each word, and cannot handle words (or senses) which are not registered in the dictionary. Although some studies have attempted to detect novel senses of words for given contexts BIBREF8 , BIBREF9 , they do not provide definition sentences. Our task avoids these difficulties in wsd by directly generating descriptions for phrases or words with their contexts. It also allows us to flexibly tailor a fine-grained definition for the specific context. Paraphrasing BIBREF14 , BIBREF15 (or text simplification BIBREF16 ) can be used to rephrase words with unknown senses. However, the targets of paraphrase acquisition are words (or phrases) with no specified context. Although several studies BIBREF17 , BIBREF18 , BIBREF19 consider sub-sentential (context-sensitive) paraphrases, they do not intend to obtain a definition-like description as a paraphrase of a word. Recently, BIBREF0 noraset2017definition introduced a task of generating a definition sentence of a word from its pre-trained embedding. Since their task does not take local contexts of words as inputs, their method cannot generate an appropriate definition for a polysemous word for a specific context. To cope with this problem, BIBREF1 gadetsky2018conditional have proposed a definition generation method that works with polysemous words in dictionaries. They present a model that utilizes local context to filter out the unrelated meanings from a pre-trained word embedding in a specific context. While their method uses local context only for disambiguating the meanings that are mixed up in word embeddings, the information from local contexts cannot be utilized if the pre-trained embeddings are unavailable or unreliable. On the other hand, our method can fully utilize the local context through an attentional mechanism, even if reliable word embeddings are unavailable. Focusing on non-standard English words (or phrases), BIBREF2 ni2017learning generated their explanations solely from sentences with those words. Their model does not take advantage of global contexts (word embeddings induced from massive text) as was used in BIBREF0 noraset2017definition. Our task of describing phrases with their given contexts is a generalization of these three tasks BIBREF0 , BIBREF2 , BIBREF1 , and the proposed method naturally utilizes both local and global contexts of the word in question. Conclusions This paper sets up a task of generating a natural language description for a word/phrase with a specific context, aiming to help us acquire unknown word senses when reading text. We approached this task by using a variant of encoder-decoder models that capture the given local context by an encoder and global contexts by the target word's embedding induced from massive text. Experimental results on three existing datasets and one novel dataset built from Wikipedia confirmed that the use of both local and global contexts is the key to generating appropriate context-sensitive descriptions in various situations. 
We plan to modify our model to use multiple contexts in text to improve the quality of descriptions, considering the “one sense per discourse” hypothesis BIBREF20 .
Yes
5efed109940bf74ed0a9d4a5e97a535502b23d27
5efed109940bf74ed0a9d4a5e97a535502b23d27_0
Q: Do they use skipgram version of word2vec? Text: Introduction Embedding words in a common vector space can enable machine learning algorithms to achieve better performance in natural language processing (NLP) tasks. Word2vec BIBREF0 is a recently proposed family of algorithms for training such vector representations from unstructured text data via shallow neural networks. The geometry of the resulting vectors was shown in BIBREF0 to capture word semantic similarity through the cosine similarity of the corresponding vectors as well as more complex semantic relationships through vector differences, such as vec(“Madrid”) - vec(“Spain”) + vec(“France”) INLINEFORM0 vec(“Paris”). More recently, novel applications of word2vec involving unconventional generalized “words” and training corpuses have been proposed. These powerful ideas from the NLP community have been adapted by researchers from other domains to tasks beyond representation of words, including relational entities BIBREF1 , BIBREF2 , general text-based attributes BIBREF3 , descriptive text of images BIBREF4 , nodes in graph structure of networks BIBREF5 , and queries BIBREF6 , to name a few. While most NLP applications of word2vec do not require training of large vocabularies, many of the above mentioned real-world applications do. For example, the number of unique nodes in a social network BIBREF5 or the number of unique queries in a search engine BIBREF6 can easily reach few hundred million, a scale that is not achievable using existing word2vec implementations. The training of vectors for such large vocabularies presents several challenges. In word2vec, each vocabulary word has two associated INLINEFORM0 -dimensional vectors which must be trained, respectively referred to as input and output vectors, each of which is represented as an array of INLINEFORM1 single precision floating point numbers BIBREF0 . To achieve acceptable training latency, all vectors need to be kept in physical memory during training, and, as a result, word2vec requires INLINEFORM2 bytes of RAM to train a vocabulary INLINEFORM3 . For example, in Section SECREF2 , we discuss the search advertisement use case with 200 million generalized words and INLINEFORM4 which would thus require INLINEFORM5 = 480GB memory which is well beyond the capacity of typical commodity servers today. Another issue with large vocabulary word2vec training is that the training corpuses required for learning meaningful vectors for such large vocabularies, are themselves very large, on the order of 30 to 90 billion generalized words in the mentioned search advertising application, for example, leading to potentially prohibitively long training times. This is problematic for the envisioned applications which require frequent retraining of vectors as additional data containing new “words” becomes available. The best known approach for refreshing vectors is to periodically retrain on a suitably large window comprised of the most recent available data. In particular, we found that tricks like freezing the vectors for previously trained words don't work as well. The training latency is thus directly linked to staleness of the vectors and should be kept as small as feasible without compromising quality. Our main contribution is a novel distributed word2vec training system for commodity shared compute clusters that addresses these challenges. 
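As a quick sanity check of the memory figure quoted above, the snippet below reproduces the 480GB estimate from the stated requirement of two single-precision d-dimensional vectors (input and output) per vocabulary word.

vocab_size = 200_000_000                  # generalized words in the sponsored search use case
d = 300                                   # vector dimension
bytes_needed = 2 * vocab_size * d * 4     # input + output vectors, 4 bytes per float
print(bytes_needed / 1e9, "GB")           # -> 480.0 GB, beyond a typical commodity server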
The proposed system: As discussed in Section SECREF4 , to the best of our knowledge, this is the first word2vec training system that is truly scalable in both of these aspects. We have implemented the proposed word2vec training system in Java and Scala, leveraging the open source building blocks Apache Slider BIBREF10 and Apache Spark BIBREF11 running on a Hadoop YARN-scheduled cluster BIBREF12 , BIBREF13 . Our word2vec solution enables the aforementioned applications to efficiently train vectors for unprecedented vocabulary sizes. Since late 2015, it has been incorporated into the Yahoo Gemini Ad Platform (https://gemini.yahoo.com) as a part of the “broad” ad matching pipeline, with regular retraining of vectors based on fresh user search session data. Sponsored search use case Sponsored search is a popular advertising model BIBREF14 used by web search engines, such as Google, Microsoft, and Yahoo, in which advertisers sponsor the top web search results in order to redirect user's attention from organic search results to ads that are highly relevant to the entered query. Most search engines provide a self-service tool in which the advertisers can create their own ads by providing ad creative to be shown to the users, along with a list of bid terms (i.e., queries for which advertisers wish to show their ad). Due to a large number of unique queries it is challenging for advertisers to identify all queries relevant to their product or service. For this reason search engines often provide a service of “broad” matching, which automatically finds additional relevant queries for advertisers to bid on. This is typically implemented by placing queries and ads in a common feature space, such as bag-of-words using tf-idf weighting, and calculating similarity between ads and queries using a feature space metric in order to find good broad match candidates. In an unconventional application of word2vec to historical search logs, one could train query and ad vectors that capture semantic relationships and find relevant broad match candidates in the resulting feature space. The idea of using word2vec to train query representations is not new and has been suggested by several researchers in the past BIBREF15 , BIBREF6 . However, until now, it was not possible to use the algorithm to its fullest extent due to computational limitations of existing word2vec implementations. The sponsored search training corpus consists of billions of user search sessions each comprising generalized “words” corresponding to entire user queries (not the individual words in the queries), clicked hyperlinks, and clicked advertisements, ordered according to the temporal ordering of the corresponding user actions. Figure FIGREF1 shows a snippet from such a training corpus wherein the clicked ads and search link clicks are encoded as string IDs prefixed by “adid_” and “slc_”, respectively. The queries are highlighted in bold. The goal is to train vector representations for queries, hyperlinks, and advertisements, and to use the semantic similarity captured by these vectors to target advertisements to semantically relevant queries that might otherwise not be found to be relevant using more conventional measures, such as prior clicks or the number of constituent words common to the query and advertisement meta data (i.e., title, description, bid keywords). 
Note that although the search result hyperlinks clicked by the user are not needed for the sponsored search system, they are nevertheless important to include during training as they help propagate relevance between the queries and ads of interest. Given trained query and ad vectors, finding relevant queries for a given ad amounts to calculating cosine similarity between the ad vector and all query vectors. The INLINEFORM0 queries with the highest similarity are retrieved as broad matches. As illustrated in Figure FIGREF5 for representative search session data, the fraction of query occurrences in the search sessions for which vectors are available, and hence for which potential ads can be found using this vector-based approach, increases at a steady pace with the number of queries in the vocabulary, even with as many as 120 million queries, each occurring at least 5 times. This observation suggests that this application can benefit greatly from vocabularies of 200 million or more generalized words. Moreover, we found that there are around 800 million generalized words that occur 5 or more times in our largest data sets, indicating that additional scaling far beyond 200 million is well worth pursuing. The results of BIBREF6 were based on training the largest vocabulary that could fit into the large memory of a special purpose server, which resulted in learned vector representations for about 45 million words. The proposed training system herein enables increasing this by several fold, resulting in far greater coverage of queries and a potentially significant boost in query monetization, as indicated by Figure FIGREF5 . The word2vec training problem In this paper we focus on the skipgram approach with random negative examples proposed in BIBREF0 . This has been found to yield the best results among the proposed variants on a variety of semantic tests of the resulting vectors BIBREF7 , BIBREF0 . Given a corpus consisting of a sequence of sentences INLINEFORM0 each comprising a sequence of words INLINEFORM1 , the objective is to maximize the log likelihood: DISPLAYFORM0 over input and output word row vectors INLINEFORM0 and INLINEFORM1 with INLINEFORM2 ranging over the words in the vocabulary INLINEFORM3 , where: We follow BIBREF0 for setting INLINEFORM0 and select words occurring in the corpus a sufficient number of times (e.g., at least 5 times), or, if this results in too many words, as the most frequently occurring INLINEFORM1 words, where INLINEFORM2 is the largest number words that can be handled by available computational resources. We further also assume a randomized version of ( EQREF6 ) according to the subsampling technique of BIBREF0 , which removes some occurrences of frequent words. The algorithm for maximizing ( EQREF6 ) advocated in BIBREF0 , and implemented in its open–source counterpart, is a minibatch stochastic gradient descent (SGD). Our training system is also based on minibatch SGD optimization of ( EQREF6 ), however, as described in Section SECREF5 , it is carried out in a distributed fashion in a manner quite different from the implementation of BIBREF0 . Any form of minibatch SGD optimization of ( EQREF6 ) involves the computation of dot products and linear combinations between input and output word vectors for all pairs of words occurring within the same window (with indices in INLINEFORM0 ). 
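For readers without access to the rendered equations, the DISPLAYFORM0 placeholder above most plausibly corresponds to the standard skipgram objective with negative sampling of BIBREF0 ; a hedged reconstruction (the exact notation of ( EQREF6 ) may differ) is

\[
\sum_{s \in \mathcal{S}} \; \sum_{i : w_i \in s} \; \sum_{\substack{j \ne i \\ |j - i| \le b}}
\left[ \log \sigma\!\left( \mathbf{u}_{w_j} \mathbf{v}_{w_i}^{\top} \right)
 + \sum_{\tilde{w} \in \mathcal{N}_{i,j}} \log \sigma\!\left( -\,\mathbf{u}_{\tilde{w}} \mathbf{v}_{w_i}^{\top} \right) \right],
\]

where $\mathbf{v}_{w}$ and $\mathbf{u}_{w}$ are the input and output row vectors of word $w$, $\sigma(x) = 1/(1+e^{-x})$, $b$ is the (possibly randomized) window size, and $\mathcal{N}_{i,j}$ is the set of random negative examples drawn for the pair $(i, j)$. Minibatch SGD on an objective of this form requires exactly the dot products and vector linear combinations described in the preceding paragraph.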
This is a massive computational task when carried out for multiple iterations over data sets with tens of billions of words, as encountered in applications described in the previous section. Single machine Several existing word2vec training systems are limited to running on a single machine, though with multiple parallel threads of execution operating on different segments of training data. These include the original open source implementation of word2vec BIBREF0 , as well as those of Medallia BIBREF16 , and Rehurek BIBREF17 . As mentioned in the introduction, these systems would require far larger memory configurations than available on typical commodity-scale servers. Distributed data-parallel A similar drawback applies to distributed data-parallel training systems like those available in Apache Spark MLLib BIBREF18 and Deeplearning4j BIBREF19 . In the former, in each iteration the Spark driver sends the latest vectors to all Spark executors. Each executor modifies its local copy of vectors based on its partition of the training data set, and the driver then combines local vector modifications to update the global vectors. It requires all vectors to be stored in the memory of all Spark executors, and, similarly to its single machine counterparts, is thus unsuitable for large vocabularies. The Deeplearning4j system takes a similar approach and thus suffers from the same limitations, although it does enable the use of GPUs to accelerate the training on each machine. Parameter servers A well-known distributed architecture for training very large machine learning models centers around the use of a parameter server to store the latest values of model parameters through the course of training. A parameter server is a high performance, distributed, in-memory key-value store specialized to the machine learning training application. It typically needs to support only fixed-size values corresponding to the model parameters, and also may support additive updates of values in addition to the usual key-value gets and puts. A parameter server-based training system also includes a number of worker/learner/client nodes that actually carry out the bulk of the training computations. The client nodes read in and parse training data in chunks or minibatches, fetch the model parameters that can be updated based on each minibatch, compute the updates (e.g., via gradient descent with respect to a minibatch restriction of the objective), and transmit the changes in parameter values to the parameter server shards which either overwrite or incrementally update these values in their respective in-memory stores. As observed and partially theoretically justified in BIBREF20 (see also BIBREF21 ), in many applications involving sparse training data characterized by low average overlap between the model parameters associated with different minibatches, the model parameter updates arriving in parallel from multiple client nodes can be aggregated on the parameter server shards without locking, synchronization, or atomicity guarantees, and still result in a far better model accuracy versus training time latency trade-off than single threaded (i.e., sequential) training. 
The parameter server paradigm has been applied successfully to the training of very large models for logistic regression, deep learning, and factorization machines, and to sampling from the posterior topic distribution in large-scale Latent Dirichlet Allocation BIBREF22 , BIBREF23 , BIBREF21 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF9 , BIBREF27 , BIBREF28 . There have also been some attempts to extend the parameter-server approach to word2vec (e.g., BIBREF29 ). These have followed the above computational flow, with each parameter server shard storing the input and output vectors for a subset of the vocabulary. Multiple client nodes process minibatches of the training corpus, determining for each word in each minibatch the associated context words and random negative examples, issuing get requests to the parameter server shards for the corresponding vectors, computing the gradients with respect to each vector component, and issuing put or increment requests to update the corresponding vectors in the parameter server shards. Unfortunately, such a conventional parameter server-based word2vec training system requires too much network bandwidth to achieve acceptable training throughput. Using the skipgram training algorithm and denoting algorithm parameters as INLINEFORM0 for vector dimension, INLINEFORM1 for number of words per minibatch, INLINEFORM2 for average context size, and INLINEFORM3 for the number of random negative examples per context word, assuming negligible repetition of words within the minibatch and among the negative examples, and further assuming that vectors and their gradients are communicated and stored as arrays of single-precision floating point numbers at 4 bytes each, the amount of word vector data transferred for each get and put call from and to the parameter server, respectively, is on average INLINEFORM4 , or about DISPLAYFORM0 bytes per trained minibatch word. The formula arises from the fact that the input and output vectors for each term in the minibatch must be sent (this the '2' in the first factor in ( EQREF15 )), as must the output vectors for each random negative example. There are on average INLINEFORM0 of these per minibatch word. For INLINEFORM0 , values within the ranges recommended in BIBREF0 , this works out to INLINEFORM1 bytes transferred per word with each get and put. For 10 iterations of training on a data set of roughly 50 billion words, which is in the middle of the relevant range for the sponsored search application described in Section SECREF2 , attaining a total training latency of one week using the above system would require an aggregate bandwidth of at least 1300Gbits/sec to and from the parameter servers. This is impractically large for a single application on a commodity-hardware shared compute cluster. Moreover, one week training latency is already at the boundary of usefulness for our applications. In the next section, we present a different distributed system architecture for word2vec that requires significantly less network bandwidth for a given training throughput than the above conventional parameter server-based system, while continuing to accommodate large vocabularies and providing sufficient computational power to achieve the higher throughput allowed by the reduction in network bandwidth. Architecture Our distributed word2vec training system (i.e., for maximizing ( EQREF6 )) is illustrated in Figure FIGREF18 , with pseudo code for the overall computational flow in Figures SECREF8 , SECREF8 , and SECREF8 in the Appendix. 
As can be seen in Figure FIGREF18 , the proposed system also features parameter-server-like components (denoted by “PS shards” in the figure), however they are utilized very differently and have very different capabilities from their counterparts in the conventional approach described above. We shall, however, continue to refer to these components as parameter server shards. The system features the following innovations, explained in more detail below, with respect to the conventional approach. Column-wise partitioning of word vectors among parameter server (PS) shards (as opposed to word-wise partitioning). No transmission of word vectors or vector gradients across the network. Server-side computation of vector dot products and vector linear combinations, distributed by column partitions. Distributed server-side generation of random negative examples via broadcasting of common random number generator seeds. In particular, avoiding the transmission of vectors and gradients greatly reduces network bandwidth requirements relative to the conventional approach. We are not aware of any existing systems for training word2vec or its close relatives, matrix factorization and collaborative filtering (i.e., those systems cited in the previous section), that distribute vectors and compute in the manner of the proposed system. In our system, a number of parameter server shards each stores a designated portion of every input (row) vector INLINEFORM0 INLINEFORM1 INLINEFORM2 and output (row) vector INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 (dependence of components on INLINEFORM7 is suppressed). For example, assuming a vector dimension INLINEFORM8 , 10 parameter server shards, and equi-partitioned vectors, shard INLINEFORM9 would store the 30 components of INLINEFORM10 and INLINEFORM11 with indices INLINEFORM12 in the range INLINEFORM13 . We shall denote shard INLINEFORM14 stored portion of INLINEFORM15 and INLINEFORM16 as INLINEFORM17 and INLINEFORM18 , respectively. We refer to this as a 'column-wise' partitioning of the vectors, or more specifically, of the matrix whose rows correspond to the word vectors, as in INLINEFORM19 where INLINEFORM0 are the words in the vocabulary according to a fixed ordering INLINEFORM1 (e.g., by decreasing frequency of occurrence in the corpus). In the sequel, we shall equate each word INLINEFORM2 with INLINEFORM3 , its index in this ordering, so that INLINEFORM4 , and so on. For INLINEFORM5 shards, the vocabulary size can thus be scaled up by as much as a factor of INLINEFORM6 relative to a single machine. The vectors are initialized in the parameter server shards as in BIBREF0 . Multiple clients running on cluster nodes then read in different portions of the corpus and interact with the parameter server shards to carry out minibatch stochastic gradient descent (SGD) optimization of ( EQREF6 ) over the word vectors, following the algorithm in Figure SECREF8 (in the appendix). Specifically, the corpus is partitioned into disjoint minibatches with index sets INLINEFORM0 wherein each INLINEFORM1 is a subset of (sentence index, word index) pairs. For each INLINEFORM2 the word vectors are adjusted based on the gradient of the summation ( EQREF6 ) restricted to the input words belonging to INLINEFORM3 , as given by DISPLAYFORM0 The gradient of INLINEFORM0 with respect to the word vector components is 0 for all word vector components whose corresponding words do not appear as inputs, outputs, or negative examples in ( EQREF25 ). 
For the remaining components, the gradient is conveniently expressed in groups of components corresponding to specific word vectors. For example, consider a pair of indices INLINEFORM1 belonging to INLINEFORM2 . The gradient components corresponding to the word vector INLINEFORM3 can be expressed as DISPLAYFORM0 We see that evaluation of INLINEFORM0 requires computing the dot (or inner) products INLINEFORM1 appearing in the arguments to INLINEFORM2 and then computing linear combinations of the vectors INLINEFORM3 and INLINEFORM4 , with weights depending on the dot products. A similar expression and computation applies to the other gradient components corresponding to other word vectors appearing in INLINEFORM5 . The vector INLINEFORM6 (and, correspondingly, the other vectors as well) are updated according to the usual SGD update rule DISPLAYFORM0 where INLINEFORM0 is a (suitably small) learning rate. Once a client has assembled the indices (indexing according to the order INLINEFORM0 above) of positive output examples and input words corresponding to a minibatch INLINEFORM1 , it interacts with the parameter server shards to compute ( EQREF26 ) and ( EQREF27 ) using two remote procedure calls (RPCs), dotprod and adjust, which are broadcasted to all PS shards, along with an intervening computation to aggregate results from the dotprod RPC returned by each shard. The RPC calls are detailed in Figures SECREF8 and SECREF8 (in the Appendix), and, at a higher level, entail the following server/shard side operations: dotprod: Select negative examples INLINEFORM0 in ( EQREF26 ) according to a probability distribution derived from the vocabulary histogram proposed in BIBREF0 , but with the client thread supplied seed initializing the random number generation, and then return all partial dot products required to evaluate the gradient ( EQREF26 ) for all positive output, negative output, and input word vectors associated with the minibatch, wherein the partial dot products involve those vector components stored on the designated shard: INLINEFORM1 . adjust: Regenerate negative examples used in preceding dotprod call using the same seed that is again supplied by the client thread. Compute ( EQREF27 ) for vector components associated with the minibatch stored on the shard as a partial vector (restricted to components stored on shard) linear combination using weights received from the client. Between these two RPCs the client computes the linear combination weights needed for adjust by summing the partial inner products returned by the shards in response to the dotprod calls and evaluating the sigmoid function at values given by the aggregated dot products. These weights are then passed to the adjust RPC, along with the seeds for regenerating the identical random negative example indices INLINEFORM0 that were generated during the dotprod RPC. The retransmission simplifies the server in that state need not be maintained between corresponding dotprod and adjust calls. Note that the same seeds are sent to all shards in both calls so that each shard generates the same set of negative example indices. The shards are multithreaded and each thread handles the stream of RPC's coming from all client threads running on a single node. In a typical at scale run of the algorithm, the above process is carried out by multiple client threads running on each of a few hundred nodes, all interacting with the PS shards in parallel. 
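The column-wise split and the dotprod/adjust exchange can be illustrated in a few lines. The NumPy sketch below is a single-process toy (not the production Java/Scala system; sizes, the learning rate, and helper names are arbitrary assumptions): each "shard" owns a column slice of every vector, returns partial dot products for one (input, context) pair, regenerates the negative examples from the client-supplied seed, and applies the linear-combination updates to its own slice only.

import numpy as np

V, d, S, N = 1000, 30, 3, 2                       # vocab, dimension, shards, negatives (toy)
cols = np.array_split(np.arange(d), S)
rng = np.random.default_rng(1)
U_in = [rng.normal(scale=0.1, size=(V, len(c))) for c in cols]    # input-vector column slices
U_out = [rng.normal(scale=0.1, size=(V, len(c))) for c in cols]   # output-vector column slices

def dotprod(shard, inp, out, seed):
    negs = np.random.default_rng(seed).integers(0, V, size=N)     # shard-side negative examples
    return [float(U_in[shard][inp] @ U_out[shard][o]) for o in [out, *negs]]

def adjust(shard, inp, out, seed, weights, lr=0.025):
    negs = np.random.default_rng(seed).integers(0, V, size=N)     # same seed -> same negatives
    v0 = U_in[shard][inp].copy()                                  # values at the start of the call
    delta = np.zeros_like(v0)
    for w, o in zip(weights, [out, *negs]):
        delta += lr * w * U_out[shard][o]
        U_out[shard][o] = U_out[shard][o] + lr * w * v0
    U_in[shard][inp] += delta                                     # input vector updated at the end

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
inp, out, seed = 7, 42, 12345                                     # one (input word, context word) pair
partials = np.array([dotprod(s, inp, out, seed) for s in range(S)])
dots = partials.sum(axis=0)                                       # client sums the shard partials
weights = [1 - sigmoid(dots[0])] + [-sigmoid(x) for x in dots[1:]]
for s in range(S):                                                # broadcast of the adjust "RPC"
    adjust(s, inp, out, seed, weights)

Only the short lists of partial dot products and linear-combination weights cross the "network" here; the vectors themselves never leave the shards, which is the source of the bandwidth savings analysed next.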
The data set is iterated over multiple times and after each iteration, the learning rate INLINEFORM0 is reduced in a manner similar to the open source implementation of BIBREF0 . Note that there is no locking or synchronization of the word vector state within or across shards or across client threads during any part of the computation. The only synchronization in effect is that the RPC broadcast ensures that all shards operate on the same set of word vector indices for computing their portion of the corresponding calls. Additionally, the client threads independently wait for all responses to their corresponding dotprod calls before proceeding. The lack of synchronization introduces many approximations into the overall SGD computation, similar in spirit to the HOGWILD BIBREF20 and Downpour SGD BIBREF21 distributed optimization schemes. For example, here, in the worst case, the state of the vectors associated with a minibatch could change between the dotprod and adjust calls issued by a single client thread. Nevertheless, despite such approximations, our distributed algorithm incurs surprisingly little degradation in the quality of the trained vectors as compared to single machine solutions (in cases where the computation can be carried out on one machine), as shown in Section SECREF7 . Two details of our version of the algorithm and implementation are helpful for improving convergence/performance on some data sets. One is that in the adjust computation (Figure SECREF8 ) the word vectors belonging to the minibatch are not updated until the end of the call so that references to word vectors throughout the call are to their values at the start of the call. The second is an option for interleaved minibatch formation, which can be used to ensure that indices INLINEFORM0 of input words belonging to a minibatch are sufficiently separated in the training corpus, and ideally, belong to different sentences. This allows input word vectors within a sentence (which are linked through their overlapping output word windows) to “learn” from each other during a single training iteration, as their respective minibatches are processed. Network bandwidth analysis Using the same notation as in ( EQREF15 ), and letting INLINEFORM0 denote the number of shards, the average bytes transferred from all PS shards for each dotprod call is upper bounded by DISPLAYFORM0 That is, each shard transfers the partial dot product results between the input vector of each minibatch word and all context words (there are no more than an average of INLINEFORM0 of these per minibatch word) and negative examples (there are no more than INLINEFORM1 per context per minibatch word, or INLINEFORM2 per minibatch word). It is not hard to see that this is precisely the number of bytes transferred to all PS shards for the vector linear combination component of each adjust call. That is, there are two linear vector updates for each pair of vectors for which a dot product was computed, and these updates involve the same linear combination weight. Normalizing ( EQREF31 ) by the minibatch size, we have the following counterpart of ( EQREF15 ) for the bytes transferred, in each direction, per trained minibatch word, for the proposed scheme: DISPLAYFORM0 Notice that the vector dimension INLINEFORM0 has been replaced by the number of shards INLINEFORM1 . 
The ratio of the network bandwidths of the proposed system and a conventional parameter server based system is INLINEFORM0. For typical parameters of interest (we typically have INLINEFORM0 between 10 and 20, increasing with INLINEFORM1 between 300 and 1000), this is in the range of INLINEFORM2 to INLINEFORM3, effectively eliminating network bandwidth as a bottleneck for training latency, relative to the conventional approach.

Implementation on Hadoop

We have implemented the system described in Section SECREF5 in Java and Scala on a Hadoop YARN-scheduled cluster, leveraging Slider BIBREF10 and Spark BIBREF11. Our end-to-end implementation of training carries out four steps: vocabulary generation, data set preprocessing, training, and vector export. We next review the details of each of these steps. Throughout, all data, including the initial training data, its preprocessed version, and the exported vectors, is stored in the Hadoop Distributed File System (HDFS). We remark that although our compute environment is currently based on Hadoop and Spark, other distributed computational frameworks such as the recently released TensorFlow could also serve as a platform for implementing the proposed system.

Main steps

Vocabulary generation. This step entails counting occurrences of all words in the training corpus and sorting them in order of decreasing occurrence. As mentioned, the vocabulary is taken to be the INLINEFORM0 most frequently occurring words that occur at least some number INLINEFORM1 of times. It is implemented in Spark as a straightforward map-reduce job (a schematic Spark job for this step is sketched after the training-step description below).

Preprocessing. In this step, each word in the training corpus is replaced by its index in the sorted vocabulary generated in the preceding phase (the ordering INLINEFORM0 referred to in Section SECREF5). This is also implemented in Spark using a low-overhead in-memory key-value store to store the mapping from vocabulary words to their indices. Our implementation hashes words to 64-bit keys to simplify the key-value store.

Training. Referring to the system description in Section SECREF5 (and Figure FIGREF18), the parameter server portion is implemented in Java, with the RPC layer based on the Netty client-server library BIBREF30. The RPC layer of the client is implemented similarly. The higher layers of the client (I/O, minibatch formation, partial dot product aggregation, linear combination weight computation) are implemented in Scala and Spark. In particular, the clients are created and connect to the PS shards from within an RDD mapPartitions method applied to the preprocessed data set that is converted to an RDD via the standard Spark file-to-RDD API. At the start of training, the PS shards are launched from a gateway node onto Hadoop cluster nodes using the Apache Slider application that has been designed to launch arbitrary applications onto a Hadoop YARN-scheduled cluster. The IP addresses and ports of the respective PS shards are extracted and passed to the Spark executors (which in turn use them to connect respective clients to the PS shards) as a file via the standard spark-submit command line executed on a gateway node. Each mapPartitions operation in the clients is multi-threaded with a configurable number of threads handling the processing of the input data and the interaction with the PS shards. These threads share the same connections with the PS shards. The PS shards are also multi-threaded based on Netty, wherein a configurable number of worker threads process incoming dotprod and adjust requests from multiple connections in parallel.
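Returning to the vocabulary-generation step above, the following is a minimal sketch of how such a Spark map-reduce job could look. It is an illustration only: the input and output paths, the minCount and maxVocab cutoffs, and the whitespace tokenization are assumptions rather than details taken from the production job.

```scala
import org.apache.spark.sql.SparkSession

// Schematic Spark job for the vocabulary-generation step: count word occurrences,
// sort by decreasing count, apply the minimum-count cutoff, and keep the most
// frequent words, emitting (index, word, count) lines.
object VocabGen {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("vocab-gen").getOrCreate()
    val minCount = 5          // keep words occurring at least this many times (illustrative)
    val maxVocab = 200000000L // cap on vocabulary size (illustrative)

    spark.sparkContext.textFile("hdfs:///corpus/sessions") // hypothetical input path
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .map(w => (w, 1L))
      .reduceByKey(_ + _)
      .filter { case (_, count) => count >= minCount }
      .sortBy({ case (_, count) => count }, ascending = false)
      .zipWithIndex()
      .filter { case (_, idx) => idx < maxVocab }
      .map { case ((word, count), idx) => s"$idx\t$word\t$count" }
      .saveAsTextFile("hdfs:///vocab")                      // hypothetical output path

    spark.stop()
  }
}
```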
Continuing with the training step: each shard has a connection to each Spark executor. The word vector portions are stored in each PS shard in arrays of primitive floats, and as mentioned, their indices in the arrays coincide with the indices of their corresponding words in the vocabulary. In the steady state, the PS allocates no new data structures to avoid garbage collection. Objects are created only during start-up, and possibly during the fairly infrequent connection setups, as managed by the Netty RPC layer.

Vector export. In this final step, carried out after training has completed, the partial vectors stored in each PS shard are aggregated and joined with their respective words in the vocabulary and stored together as a text file in HDFS. Again, we leverage Spark to carry out this operation in a distributed fashion, by creating an RDD from the vocabulary and using mapPartitions to launch clients that get the partial vectors from the PS shards for the respective partition of vocabulary words, combine the partial vectors, and save the corresponding word and vector pairs to HDFS.

Training step throughput

To give an idea of the kind of training throughput we can achieve with this system, the following is one configuration we have used for training the sponsored search application on our Hadoop cluster. Algorithm parameters: 200 million word vocabulary, 5 negative examples, maximum window size of 10. Training system parameters: 200 Spark executors, 8 threads per Spark executor, minibatch size of 200. This configuration yields the following training throughputs in minibatch input words per second (see Section SECREF3 for the definition of input word), for varying numbers of PS shards and vector dimensions:

For this data set and algorithm parameters, each input word has associated with it an average of about 20 positive context words and negative examples, so that the system is effectively updating a number of vectors per second equal to about 21 times the value in the third column of the table. For the first line of the table, for example, this is over 33 million 300-dimensional vector updates per second. The conventional parameter server approach would require a total bandwidth of about 300 Gbps (30 server shards would be needed for this) to and from the parameter server for similar training throughput. This is close to 10 percent of the fabric bandwidth in our production data center. The proposed system requires only about 15 Gbps, making it far more practical for deployment to production in a shared data center, especially in light of the training latency for which this bandwidth must be sustained, which is about two days for data sets of interest. Even more extreme is the last line of the table (the 1000 dim. case), for which the equivalent-throughput conventional system would require 800 Gbps vs. 20 Gbps for the proposed system.

One important property of the training system is that its throughput at any given time is limited by the throughput of the slowest PS shard at that time. With this in mind, we use the YARN scheduler resource reservation capability exported through Slider to minimize resource contention on all of the machines to which the PS shards are assigned, thereby achieving higher sustained throughput. Another important property of the training system is that increasing the number of shards beyond some point is not helpful, since the vector portions handled by each shard become so small that the random access memory transaction bandwidth (number of random cache lines per second) becomes the bottleneck.
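The throughput table itself is not reproduced in this extraction, but the quoted bandwidth figures can be sanity-checked with a back-of-envelope calculation under a few inferred values: roughly 1.6 million minibatch input words per second (the quoted 33 million 300-dimensional vector updates per second divided by about 21), about 20 (context plus negative) partial results per input word, 4 bytes per float, S = 15 shards, and d = 300. All numbers below are approximate and rest on those inferred values.

```latex
% Back-of-envelope consistency check of the quoted ~15 Gbps vs. ~300 Gbps figures.
\begin{align*}
\text{proposed, per direction} &\approx 1.6\times 10^{6}\ \tfrac{\text{words}}{\text{s}}
   \cdot 4 \cdot 20 \cdot 15\ \tfrac{\text{bytes}}{\text{word}}
   \approx 1.9\ \text{GB/s} \approx 15\ \text{Gbit/s},\\
\text{conventional, per direction} &\approx 1.6\times 10^{6}\ \tfrac{\text{words}}{\text{s}}
   \cdot 4 \cdot 20 \cdot 300\ \tfrac{\text{bytes}}{\text{word}}
   \approx 38\ \text{GB/s} \approx 300\ \text{Gbit/s}.
\end{align*}
```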
The memory-bandwidth bottleneck just described explains the limited throughput scaling with PS shards for the 300 dimensional case above. Further optimization of the vector-store of each PS shard with respect to caching and non-uniform memory access might be beneficial. We leave this for future investigation.

Evaluation & Deployment

In this section, we provide evidence that the vectors trained by the proposed distributed system are of high quality, even with fairly aggressive parallelism during training. We also show bucket test results on live web search traffic that compare the query-ad matching performance of our large-vocabulary model to the one trained using the single-machine implementation, which led to the decision to deploy the proposed system in production in late 2015.

Benchmark data set

To compare the proposed distributed system against the single-machine implementation of BIBREF0, we trained vectors on a publicly available data set collected and processed by the script 'demo-train-big-model-v1-compute-only.sh' from the open-source package of BIBREF0. This script collects a variety of publicly available text corpuses and processes them using the algorithm described in BIBREF0 to coalesce sufficiently co-occurring words into phrases. We then randomly shuffled the order of sentences (delimited by new line) in the data set, retaining the order of words within each sentence. The resulting data set has about 8 billion words and yields a vocabulary of about 7 million words and phrases (based on a cut off of 5 occurrences in the data set). We evaluated accuracy on the phrase analogies in the 'question-phrases.txt' file and also evaluated Spearman's rank correlation with respect to the editorial evaluation of semantic relatedness of pairs of words in the well-known wordsim-353 collection BIBREF31.

The results are shown in Table TABREF34. The first column shows results for the single-machine implementation of BIBREF0, the second for a 'low parallelism' configuration of our system using 50 Spark executors, minibatch size of 1, and 1 thread per executor, and the third column for a 'high parallelism' configuration again with 50 executors, but with minibatch size increased to 50 and 8 threads per executor. The various systems were run using the skipgram variant with 500 dimensional vectors, maximum window size of 20 (10 in each direction), 5 negative examples, subsample ratio of 1e-6 (see BIBREF0), initial learning rate of 0.01875, and 3 iterations over the data set. It can be seen that the vectors trained by the 'high parallelism' configuration of the proposed system, which is the closest to the configurations required for acceptable training latency in the large-scale sponsored search application, suffer only a modest loss in quality as measured by these tests. Note that this data set is more challenging for our system than the sponsored search data set, as it is less sparse and there is on average more overlap between words in different minibatches. In fact, if we attempt to increase the parallelism to 200 executors as was used for the training of the vectors described in the next subsection, training fails to converge altogether. We are unsure why our system yields better results than the implementation of BIBREF0 on the wordsim test, yet worse scores on the analogies test. We also note that the analogies test scores reported here involve computing the closest vector for each analogy “question” over the entire vocabulary and not just over the 1M most frequent words, as in the script 'demo-train-big-model-v1-compute-only.sh' of BIBREF0.
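As a concrete illustration of the analogy evaluation just described (closest vector searched over the entire vocabulary rather than only the top 1M words), here is a brute-force Scala sketch. The names and the in-memory Map representation are illustrative; the actual evaluation scripts of BIBREF0 are not reproduced here.

```scala
// For an analogy question "a : b :: c : ?", return the vocabulary word whose vector is
// closest (by cosine similarity) to vec(b) - vec(a) + vec(c), excluding the question words.
object AnalogyEval {
  type Vecs = Map[String, Array[Float]]

  def cosine(x: Array[Float], y: Array[Float]): Double = {
    val dot = x.indices.map(i => x(i).toDouble * y(i)).sum
    val nx = math.sqrt(x.map(v => v.toDouble * v).sum)
    val ny = math.sqrt(y.map(v => v.toDouble * v).sum)
    dot / (nx * ny)
  }

  def answer(vecs: Vecs, a: String, b: String, c: String): Option[String] =
    for (va <- vecs.get(a); vb <- vecs.get(b); vc <- vecs.get(c)) yield {
      val target = va.indices.map(i => vb(i) - va(i) + vc(i)).toArray
      vecs
        .filter { case (w, _) => w != a && w != b && w != c } // scan the whole vocabulary
        .maxBy { case (_, vec) => cosine(target, vec) }
        ._1
    }
}
```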
Sponsored Search data set

We conducted a qualitative evaluation in the context of the sponsored search application described in Section SECREF2. Figure FIGREF47 shows the queries whose trained vectors were found to be most similar (out of 133M queries) to an example ad vector, along with the respective cosine similarities to the ad vector. The figure shows the ten most and least similar among the 800 most similar queries, where we note that the ten least similar queries can still be considered to be fairly semantically similar. This particular set of vectors was trained for a vocabulary of 200M generalized words using the 300 dimensional vector, 15 PS shard settings described in Section SECREF41. We found the vector quality demonstrated in Figure FIGREF47 to be the norm based on inspections of similar matchings of query vectors to a number of ad vectors.

We also compared the cosine similarities for pairs of vectors trained using the proposed distributed system and for corresponding vector pairs trained using the open-source implementation of BIBREF0, again on a large search session data set. The former was trained using a vocabulary of 200 million generalized words while the latter was trained using about 90 million words, which is the most that could fit onto a specialized large-memory machine. For a set of 7,560 generalized word pairs with words common to the vocabularies trained by the respective systems, we found very good agreement in cosine similarities between the corresponding vectors from the two systems, with over 50% of word pairs having cosine similarity differences less than 0.06, and 91% of word pairs having differences less than 0.1.

Online A/B tests

Following successful offline evaluation of the proposed distributed system, in the following set of experiments we conducted tests on live web search traffic. We ran two bucket tests, each on INLINEFORM0 of search traffic, where we compared query-ad matches produced by training query and ad vectors on a search session data set spanning 9 months of search data. One model was trained using the implementation from BIBREF0 and the other was trained using the proposed distributed system. Both buckets were compared against a control bucket, which employed a collection of different broad match techniques used in production at the time of the test. Each of the online tests was run for 10 days, one after another, more than a month apart. The results of the tests were reported in terms of query coverage (portion of queries for which ads were shown), auction depth (number of ads per query that made it into an auction), click-through rate (CTR, or number of ad clicks divided by number of ad impressions), click yield (number of clicks), and revenue. Instead of the actual numbers, we show relative improvements over the control metrics.

Both methods produced a separate query-ad match dictionary by finding the INLINEFORM0 nearest ads in the embedding space for each search query from our vocabulary, and keeping only ads with cosine similarity above INLINEFORM1 (a schematic of this construction is sketched below). The threshold was chosen based on editorial results. To implement the bucket test, the query-ad match dictionary is produced offline and cached in the ad server memory such that ads can be retrieved in real-time given an input query. Post retrieval, a click model is used to estimate the clickability of the ad for that query and the ad is sent into an auction, where it competes with ads retrieved by other broad match algorithms. It is shown to the user if it wins one of the ad slots on the page.
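The following Scala sketch spells out the offline query-ad match dictionary construction described above. The values of K and the cosine threshold used in production are not given in the text, so they appear here as plain parameters, and the brute-force scan stands in for whatever nearest-neighbor machinery was actually used.

```scala
// For each query, keep the k nearest ads in the embedding space whose cosine similarity
// clears the threshold; the resulting map is what would be cached in the ad server.
object MatchDictionary {
  def cosine(x: Array[Float], y: Array[Float]): Double = {
    val dot = x.indices.map(i => x(i).toDouble * y(i)).sum
    dot / (math.sqrt(x.map(v => v.toDouble * v).sum) * math.sqrt(y.map(v => v.toDouble * v).sum))
  }

  def build(queries: Map[String, Array[Float]],
            ads: Map[String, Array[Float]],
            k: Int,
            threshold: Double): Map[String, Seq[(String, Double)]] =
    queries.map { case (q, qv) =>
      q -> ads.toSeq
        .map { case (ad, av) => (ad, cosine(qv, av)) }
        .filter { case (_, sim) => sim >= threshold }
        .sortBy { case (_, sim) => -sim }
        .take(k)
    }
}
```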
The first A/B test was conducted to evaluate the value of the query-ad dictionary produced by the single-machine implementation. This implementation could scale up to a model with 50M query vectors. It was compared against a control bucket that ran a production broad match module. Following positive A/B test metrics, with improvements in coverage and revenue, presented in the first row of Table TABREF48, the dictionary was launched to production and incorporated into the existing broad match production model. The second A/B test was conducted to evaluate the incremental improvement over the single-machine solution, which was already launched in production. The model contained vectors for 133M queries. As can be observed in the second row of Table TABREF48, the distributed solution provided an additional 2.44% query coverage and an additional 9.39% revenue, without degrading user experience (CTR remained neutral). This strong monetization potential of our distributed system for training large vocabularies of query and ad vectors led to its deployment in our sponsored search platform. The model is being retrained on a weekly basis, automated via Apache Oozie BIBREF32, and is currently serving more than INLINEFORM0 of all broad matches.

Conclusion

In this paper, we presented a novel scalable word2vec training system that, unlike available systems, can train semantically accurate vectors for hundreds of millions of vocabulary words with training latency and network bandwidth usage suitable for regular training on commodity clusters. We motivated the usefulness of large vocabulary word2vec training with a sponsored search application involving generalized “words” corresponding to queries, ads, and hyperlinks, for which the proposed system has been deployed to production. The results on both benchmark data sets and online A/B tests strongly indicate the benefits of the proposed approach.

[ht] INLINEFORM0 .dotprod( INLINEFORM1 , INLINEFORM2 , long INLINEFORM3 ) INLINEFORM0 Random Number Generator initialized with INLINEFORM1 INLINEFORM2 ; INLINEFORM3 iterate over words in minibatch INLINEFORM4 INLINEFORM5 iterate over words in context INLINEFORM6 INLINEFORM7 INLINEFORM8 ; INLINEFORM9 generate INLINEFORM10 random negative examples for current output word INLINEFORM11 Array( INLINEFORM12 negative word indices INLINEFORM13 , generated using INLINEFORM14 ) compute partial dot products for positive and negative examples INLINEFORM15 INLINEFORM16 INLINEFORM17 send results back to client INLINEFORM18
Server side computation - dotprod.

[ht] void INLINEFORM19 .adjust( INLINEFORM20 , INLINEFORM21 , INLINEFORM22 , INLINEFORM23 , INLINEFORM24 ) INLINEFORM0 Random Number Generator initialized with INLINEFORM1 INLINEFORM2 ; INLINEFORM3 ; INLINEFORM4 ; INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 ; INLINEFORM11 regenerate random negative examples INLINEFORM12 Array( INLINEFORM13 negative word indices INLINEFORM14 , generated using INLINEFORM15 ) compute partial gradient updates and store in scratch area INLINEFORM16 ; INLINEFORM17 INLINEFORM18 INLINEFORM19 ; INLINEFORM20 add partial gradient updates to partial vectors in store all INLINEFORM21 INLINEFORM22 ; INLINEFORM23
Server side computation - adjust.
[ht] InputinputOutputoutput INLINEFORM0 : Vectors for vocabulary words INLINEFORM1 = # of parameter servers needed for INLINEFORM2 words Launch parameter servers INLINEFORM3 Initialize vectors in PS server iteration INLINEFORM4 INLINEFORM5 UnprocessedPartitions INLINEFORM6 INLINEFORM7 each executor, in parallel UnprocessedPartitions is non-empty INLINEFORM8 INLINEFORM9 next partition in UnprocessedPartitions Launch client INLINEFORM10 connected to INLINEFORM11 INLINEFORM12 INLINEFORM13 minibatches in INLINEFORM14 INLINEFORM15 = randomly select a seed INLINEFORM16 INLINEFORM17 Array of word indices in INLINEFORM18 INLINEFORM19 INLINEFORM20 Array of Arrays of context word indices of words in INLINEFORM21 client broadcasts word indices to shards which compute partial dot products in parallel, returning results to client INLINEFORM22 INLINEFORM23 , in parallel INLINEFORM24 = INLINEFORM25 .dotprod( INLINEFORM26 , INLINEFORM27 , INLINEFORM28 ) aggregate partial dot products and compute linear coefficients for gradient update INLINEFORM29 INLINEFORM30 ; INLINEFORM31 client broadcasts coefficients to shards which compute partial vector linear combinations INLINEFORM32 INLINEFORM33 , in parallel INLINEFORM34 .adjust( INLINEFORM35 , INLINEFORM36 , INLINEFORM37 , INLINEFORM38 , INLINEFORM39 ) input vectors INLINEFORM40 } from INLINEFORM41 Grid based word2vec algorithm.
Yes
b8137eb0fa0b41f871c899a54154f640f0e9aca1
b8137eb0fa0b41f871c899a54154f640f0e9aca1_0
Q: What domains are considered that have such large vocabularies? Text:

Introduction

Embedding words in a common vector space can enable machine learning algorithms to achieve better performance in natural language processing (NLP) tasks. Word2vec BIBREF0 is a recently proposed family of algorithms for training such vector representations from unstructured text data via shallow neural networks. The geometry of the resulting vectors was shown in BIBREF0 to capture word semantic similarity through the cosine similarity of the corresponding vectors, as well as more complex semantic relationships through vector differences, such as vec(“Madrid”) - vec(“Spain”) + vec(“France”) INLINEFORM0 vec(“Paris”).

More recently, novel applications of word2vec involving unconventional generalized “words” and training corpuses have been proposed. These powerful ideas from the NLP community have been adapted by researchers from other domains to tasks beyond representation of words, including relational entities BIBREF1, BIBREF2, general text-based attributes BIBREF3, descriptive text of images BIBREF4, nodes in the graph structure of networks BIBREF5, and queries BIBREF6, to name a few. While most NLP applications of word2vec do not require training of large vocabularies, many of the above-mentioned real-world applications do. For example, the number of unique nodes in a social network BIBREF5 or the number of unique queries in a search engine BIBREF6 can easily reach a few hundred million, a scale that is not achievable using existing word2vec implementations.

The training of vectors for such large vocabularies presents several challenges. In word2vec, each vocabulary word has two associated INLINEFORM0 -dimensional vectors which must be trained, respectively referred to as input and output vectors, each of which is represented as an array of INLINEFORM1 single-precision floating point numbers BIBREF0. To achieve acceptable training latency, all vectors need to be kept in physical memory during training, and, as a result, word2vec requires INLINEFORM2 bytes of RAM to train a vocabulary INLINEFORM3. For example, in Section SECREF2, we discuss the search advertisement use case with 200 million generalized words and INLINEFORM4, which would thus require INLINEFORM5 = 480GB of memory, which is well beyond the capacity of typical commodity servers today.

Another issue with large vocabulary word2vec training is that the training corpuses required for learning meaningful vectors for such large vocabularies are themselves very large, on the order of 30 to 90 billion generalized words in the mentioned search advertising application, for example, leading to potentially prohibitively long training times. This is problematic for the envisioned applications, which require frequent retraining of vectors as additional data containing new “words” becomes available. The best known approach for refreshing vectors is to periodically retrain on a suitably large window comprising the most recent available data. In particular, we found that tricks like freezing the vectors for previously trained words do not work as well. The training latency is thus directly linked to the staleness of the vectors and should be kept as small as feasible without compromising quality.

Our main contribution is a novel distributed word2vec training system for commodity shared compute clusters that addresses these challenges.
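The 480GB figure quoted above follows from storing two single-precision vectors per vocabulary word. The vector dimension is not restated in this passage, but INLINEFORM4 = 300 is the value consistent with the quoted number:

```latex
% Memory to hold all vectors in RAM: two vectors (input and output) per word,
% d single-precision floats (4 bytes) each; |V| = 2 x 10^8, assumed d = 300.
\[
  2 \cdot |V| \cdot d \cdot 4\ \text{bytes}
  \;=\; 2 \cdot \bigl(2\times 10^{8}\bigr) \cdot 300 \cdot 4\ \text{bytes}
  \;=\; 4.8\times 10^{11}\ \text{bytes}
  \;=\; 480\ \text{GB}.
\]
```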
The proposed system, as discussed in Section SECREF4, is, to the best of our knowledge, the first word2vec training system that is truly scalable in both of these aspects, that is, in vocabulary size and in training-corpus size. We have implemented the proposed word2vec training system in Java and Scala, leveraging the open source building blocks Apache Slider BIBREF10 and Apache Spark BIBREF11 running on a Hadoop YARN-scheduled cluster BIBREF12, BIBREF13. Our word2vec solution enables the aforementioned applications to efficiently train vectors for unprecedented vocabulary sizes. Since late 2015, it has been incorporated into the Yahoo Gemini Ad Platform (https://gemini.yahoo.com) as a part of the “broad” ad matching pipeline, with regular retraining of vectors based on fresh user search session data.

Sponsored search use case

Sponsored search is a popular advertising model BIBREF14 used by web search engines, such as Google, Microsoft, and Yahoo, in which advertisers sponsor the top web search results in order to redirect users' attention from organic search results to ads that are highly relevant to the entered query. Most search engines provide a self-service tool in which the advertisers can create their own ads by providing the ad creative to be shown to the users, along with a list of bid terms (i.e., queries for which advertisers wish to show their ad). Due to the large number of unique queries, it is challenging for advertisers to identify all queries relevant to their product or service. For this reason search engines often provide a service of “broad” matching, which automatically finds additional relevant queries for advertisers to bid on. This is typically implemented by placing queries and ads in a common feature space, such as bag-of-words using tf-idf weighting, and calculating similarity between ads and queries using a feature space metric in order to find good broad match candidates.

In an unconventional application of word2vec to historical search logs, one could train query and ad vectors that capture semantic relationships and find relevant broad match candidates in the resulting feature space. The idea of using word2vec to train query representations is not new and has been suggested by several researchers in the past BIBREF15, BIBREF6. However, until now, it was not possible to use the algorithm to its fullest extent due to computational limitations of existing word2vec implementations.

The sponsored search training corpus consists of billions of user search sessions, each comprising generalized “words” corresponding to entire user queries (not the individual words in the queries), clicked hyperlinks, and clicked advertisements, ordered according to the temporal ordering of the corresponding user actions. Figure FIGREF1 shows a snippet from such a training corpus wherein the clicked ads and search link clicks are encoded as string IDs prefixed by “adid_” and “slc_”, respectively. The queries are highlighted in bold. The goal is to train vector representations for queries, hyperlinks, and advertisements, and to use the semantic similarity captured by these vectors to target advertisements to semantically relevant queries that might otherwise not be found to be relevant using more conventional measures, such as prior clicks or the number of constituent words common to the query and advertisement meta data (i.e., title, description, bid keywords).
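Figure FIGREF1 itself is not reproduced in this extraction. The following Scala sketch illustrates the session encoding just described, using a purely hypothetical session line: ad clicks and search-link clicks are carried as "adid_"- and "slc_"-prefixed IDs, and every other token is treated as a query.

```scala
// Illustrative parser for the described session encoding; the example line is hypothetical.
object SessionTokens {
  sealed trait Token
  case class Query(text: String) extends Token
  case class AdClick(id: String) extends Token
  case class LinkClick(id: String) extends Token

  def parse(session: String): Seq[Token] =
    session.split("\\s+").toSeq.map {
      case w if w.startsWith("adid_") => AdClick(w.stripPrefix("adid_"))
      case w if w.startsWith("slc_")  => LinkClick(w.stripPrefix("slc_"))
      case w                          => Query(w)
    }

  def main(args: Array[String]): Unit = {
    // Hypothetical session: a query, a clicked search link, another query, a clicked ad.
    val session = "cheap_flights slc_98765 flights_to_paris adid_12345"
    parse(session).foreach(println)
  }
}
```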
Note that although the search result hyperlinks clicked by the user are not needed for the sponsored search system, they are nevertheless important to include during training as they help propagate relevance between the queries and ads of interest. Given trained query and ad vectors, finding relevant queries for a given ad amounts to calculating the cosine similarity between the ad vector and all query vectors. The INLINEFORM0 queries with the highest similarity are retrieved as broad matches.

As illustrated in Figure FIGREF5 for representative search session data, the fraction of query occurrences in the search sessions for which vectors are available, and hence for which potential ads can be found using this vector-based approach, increases at a steady pace with the number of queries in the vocabulary, even with as many as 120 million queries, each occurring at least 5 times. This observation suggests that this application can benefit greatly from vocabularies of 200 million or more generalized words. Moreover, we found that there are around 800 million generalized words that occur 5 or more times in our largest data sets, indicating that additional scaling far beyond 200 million is well worth pursuing. The results of BIBREF6 were based on training the largest vocabulary that could fit into the large memory of a special purpose server, which resulted in learned vector representations for about 45 million words. The proposed training system herein enables increasing this by several fold, resulting in far greater coverage of queries and a potentially significant boost in query monetization, as indicated by Figure FIGREF5.

The word2vec training problem

In this paper we focus on the skipgram approach with random negative examples proposed in BIBREF0. This has been found to yield the best results among the proposed variants on a variety of semantic tests of the resulting vectors BIBREF7, BIBREF0. Given a corpus consisting of a sequence of sentences INLINEFORM0, each comprising a sequence of words INLINEFORM1, the objective is to maximize the log likelihood: DISPLAYFORM0 over input and output word row vectors INLINEFORM0 and INLINEFORM1, with INLINEFORM2 ranging over the words in the vocabulary INLINEFORM3, where:

We follow BIBREF0 for setting INLINEFORM0 and select words occurring in the corpus a sufficient number of times (e.g., at least 5 times), or, if this results in too many words, as the most frequently occurring INLINEFORM1 words, where INLINEFORM2 is the largest number of words that can be handled by available computational resources. We further also assume a randomized version of (EQREF6) according to the subsampling technique of BIBREF0, which removes some occurrences of frequent words.

The algorithm for maximizing (EQREF6) advocated in BIBREF0, and implemented in its open-source counterpart, is a minibatch stochastic gradient descent (SGD). Our training system is also based on minibatch SGD optimization of (EQREF6); however, as described in Section SECREF5, it is carried out in a distributed fashion in a manner quite different from the implementation of BIBREF0. Any form of minibatch SGD optimization of (EQREF6) involves the computation of dot products and linear combinations between input and output word vectors for all pairs of words occurring within the same window (with indices in INLINEFORM0).
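The displayed objective (EQREF6) is elided in this extraction. The standard skip-gram objective with random negative examples that it presumably denotes has the form below, with $u$ the input vectors, $v$ the output vectors, $\sigma$ the logistic sigmoid, $W$ the maximum window size, $N$ the number of negatives per (input, output) pair, and $P_{\mathrm{neg}}$ the negative-sampling distribution; this is a presumed reconstruction, not a quotation of the paper's equation.

```latex
% Presumed form of the elided skip-gram negative-sampling objective (EQREF6).
\[
  \sum_{s}\sum_{i}\sum_{j:\,0<|j-i|\le W}
  \Bigl[\,\log \sigma\bigl(u_{w_{s,i}} \cdot v_{w_{s,j}}\bigr)
  \;+\; \sum_{k=1}^{N} \mathbb{E}_{\tilde{w}_{k}\sim P_{\mathrm{neg}}}
  \log \sigma\bigl(-\,u_{w_{s,i}} \cdot v_{\tilde{w}_{k}}\bigr)\Bigr].
\]
```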
Carrying out this computation for multiple iterations over data sets with tens of billions of words, as encountered in the applications described in the previous section, is a massive computational task.

Single machine

Several existing word2vec training systems are limited to running on a single machine, though with multiple parallel threads of execution operating on different segments of training data. These include the original open source implementation of word2vec BIBREF0, as well as those of Medallia BIBREF16 and Rehurek BIBREF17. As mentioned in the introduction, these systems would require far larger memory configurations than available on typical commodity-scale servers.

Distributed data-parallel

A similar drawback applies to distributed data-parallel training systems like those available in Apache Spark MLLib BIBREF18 and Deeplearning4j BIBREF19. In the former, in each iteration the Spark driver sends the latest vectors to all Spark executors. Each executor modifies its local copy of vectors based on its partition of the training data set, and the driver then combines the local vector modifications to update the global vectors. This requires all vectors to be stored in the memory of all Spark executors and, similarly to its single-machine counterparts, is thus unsuitable for large vocabularies. The Deeplearning4j system takes a similar approach and thus suffers from the same limitations, although it does enable the use of GPUs to accelerate the training on each machine.

Parameter servers

A well-known distributed architecture for training very large machine learning models centers around the use of a parameter server to store the latest values of model parameters through the course of training. A parameter server is a high-performance, distributed, in-memory key-value store specialized to the machine learning training application. It typically needs to support only fixed-size values corresponding to the model parameters, and may also support additive updates of values in addition to the usual key-value gets and puts. A parameter server-based training system also includes a number of worker/learner/client nodes that actually carry out the bulk of the training computations. The client nodes read in and parse training data in chunks or minibatches, fetch the model parameters that can be updated based on each minibatch, compute the updates (e.g., via gradient descent with respect to a minibatch restriction of the objective), and transmit the changes in parameter values to the parameter server shards, which either overwrite or incrementally update these values in their respective in-memory stores. As observed and partially theoretically justified in BIBREF20 (see also BIBREF21), in many applications involving sparse training data characterized by low average overlap between the model parameters associated with different minibatches, the model parameter updates arriving in parallel from multiple client nodes can be aggregated on the parameter server shards without locking, synchronization, or atomicity guarantees, and still result in a far better model accuracy versus training time latency trade-off than single-threaded (i.e., sequential) training.
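To make the conventional parameter-server flow concrete, here is a minimal Scala sketch of the kind of key-value interface and client loop described above. It is not the API of any particular parameter-server system; the names and the toy in-memory implementation are illustrative only.

```scala
// Conventional parameter-server interface: fixed-size float-vector values keyed by
// parameter id, with gets, puts, and additive (lock-free) updates.
trait ParameterServer {
  def get(keys: Seq[Long]): Map[Long, Array[Float]]
  def put(values: Map[Long, Array[Float]]): Unit
  def increment(deltas: Map[Long, Array[Float]]): Unit
}

// Toy single-process "shard": a map from parameter key to a d-dimensional vector.
class InMemoryPS(d: Int) extends ParameterServer {
  private val store = scala.collection.mutable.Map.empty[Long, Array[Float]]
  def get(keys: Seq[Long]): Map[Long, Array[Float]] =
    keys.map(k => k -> store.getOrElseUpdate(k, Array.fill(d)(0.0f)).clone()).toMap
  def put(values: Map[Long, Array[Float]]): Unit =
    values.foreach { case (k, v) => store(k) = v.clone() }
  def increment(deltas: Map[Long, Array[Float]]): Unit =
    deltas.foreach { case (k, delta) =>
      val v = store.getOrElseUpdate(k, Array.fill(d)(0.0f))
      delta.indices.foreach(i => v(i) += delta(i))
    }
}

// Conventional client step: full vectors travel to the client, full gradients travel back.
object ConventionalClient {
  def step(ps: ParameterServer,
           touched: Seq[Long],
           gradient: Map[Long, Array[Float]] => Map[Long, Array[Float]]): Unit = {
    val params = ps.get(touched)   // network cost: |touched| vectors of d floats in
    val deltas = gradient(params)  // local minibatch gradient computation
    ps.increment(deltas)           // network cost: |touched| vectors of d floats out
  }
}
```

The point of the sketch is the network pattern: every minibatch moves full d-dimensional vectors and gradients over the wire, which is precisely what the bandwidth analysis below shows to be prohibitive for word2vec at scale.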
The parameter server paradigm has been applied successfully to the training of very large models for logistic regression, deep learning, and factorization machines, and to sampling from the posterior topic distribution in large-scale Latent Dirichlet Allocation BIBREF22, BIBREF23, BIBREF21, BIBREF24, BIBREF25, BIBREF26, BIBREF9, BIBREF27, BIBREF28. There have also been some attempts to extend the parameter-server approach to word2vec (e.g., BIBREF29). These have followed the above computational flow, with each parameter server shard storing the input and output vectors for a subset of the vocabulary. Multiple client nodes process minibatches of the training corpus, determining for each word in each minibatch the associated context words and random negative examples, issuing get requests to the parameter server shards for the corresponding vectors, computing the gradients with respect to each vector component, and issuing put or increment requests to update the corresponding vectors in the parameter server shards.

Unfortunately, such a conventional parameter server-based word2vec training system requires too much network bandwidth to achieve acceptable training throughput. Using the skipgram training algorithm and denoting algorithm parameters as INLINEFORM0 for vector dimension, INLINEFORM1 for number of words per minibatch, INLINEFORM2 for average context size, and INLINEFORM3 for the number of random negative examples per context word, assuming negligible repetition of words within the minibatch and among the negative examples, and further assuming that vectors and their gradients are communicated and stored as arrays of single-precision floating point numbers at 4 bytes each, the amount of word vector data transferred for each get and put call from and to the parameter server, respectively, is on average INLINEFORM4, or about DISPLAYFORM0 bytes per trained minibatch word. The formula arises from the fact that the input and output vectors for each term in the minibatch must be sent (this is the '2' in the first factor in (EQREF15)), as must the output vectors for each random negative example. There are on average INLINEFORM0 of these per minibatch word.

For INLINEFORM0 values within the ranges recommended in BIBREF0, this works out to INLINEFORM1 bytes transferred per word with each get and put. For 10 iterations of training on a data set of roughly 50 billion words, which is in the middle of the relevant range for the sponsored search application described in Section SECREF2, attaining a total training latency of one week using the above system would require an aggregate bandwidth of at least 1300Gbits/sec to and from the parameter servers. This is impractically large for a single application on a commodity-hardware shared compute cluster. Moreover, one week of training latency is already at the boundary of usefulness for our applications.

In the next section, we present a different distributed system architecture for word2vec that requires significantly less network bandwidth for a given training throughput than the above conventional parameter server-based system, while continuing to accommodate large vocabularies and providing sufficient computational power to achieve the higher throughput allowed by the reduction in network bandwidth.

Architecture

Our distributed word2vec training system (i.e., for maximizing (EQREF6)) is illustrated in Figure FIGREF18, with pseudo code for the overall computational flow in Figures SECREF8, SECREF8, and SECREF8 in the Appendix.
As can be seen in Figure FIGREF18, the proposed system also features parameter-server-like components (denoted by “PS shards” in the figure); however, they are utilized very differently and have very different capabilities from their counterparts in the conventional approach described above. We shall, however, continue to refer to these components as parameter server shards. The system features the following innovations, explained in more detail below, with respect to the conventional approach:

- Column-wise partitioning of word vectors among parameter server (PS) shards (as opposed to word-wise partitioning).
- No transmission of word vectors or vector gradients across the network.
- Server-side computation of vector dot products and vector linear combinations, distributed by column partitions.
- Distributed server-side generation of random negative examples via broadcasting of common random number generator seeds.

In particular, avoiding the transmission of vectors and gradients greatly reduces network bandwidth requirements relative to the conventional approach. We are not aware of any existing systems for training word2vec or its close relatives, matrix factorization and collaborative filtering (i.e., those systems cited in the previous section), that distribute vectors and compute in the manner of the proposed system.

In our system, a number of parameter server shards each stores a designated portion of every input (row) vector INLINEFORM0 INLINEFORM1 INLINEFORM2 and output (row) vector INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 (dependence of components on INLINEFORM7 is suppressed). For example, assuming a vector dimension INLINEFORM8, 10 parameter server shards, and equi-partitioned vectors, shard INLINEFORM9 would store the 30 components of INLINEFORM10 and INLINEFORM11 with indices INLINEFORM12 in the range INLINEFORM13. We shall denote shard INLINEFORM14's stored portion of INLINEFORM15 and INLINEFORM16 as INLINEFORM17 and INLINEFORM18, respectively. We refer to this as a 'column-wise' partitioning of the vectors, or more specifically, of the matrix whose rows correspond to the word vectors, as in INLINEFORM19 where INLINEFORM0 are the words in the vocabulary according to a fixed ordering INLINEFORM1 (e.g., by decreasing frequency of occurrence in the corpus). In the sequel, we shall equate each word INLINEFORM2 with INLINEFORM3, its index in this ordering, so that INLINEFORM4, and so on. For INLINEFORM5 shards, the vocabulary size can thus be scaled up by as much as a factor of INLINEFORM6 relative to a single machine. The vectors are initialized in the parameter server shards as in BIBREF0.

Multiple clients running on cluster nodes then read in different portions of the corpus and interact with the parameter server shards to carry out minibatch stochastic gradient descent (SGD) optimization of (EQREF6) over the word vectors, following the algorithm in Figure SECREF8 (in the appendix). Specifically, the corpus is partitioned into disjoint minibatches with index sets INLINEFORM0 wherein each INLINEFORM1 is a subset of (sentence index, word index) pairs. For each INLINEFORM2, the word vectors are adjusted based on the gradient of the summation (EQREF6) restricted to the input words belonging to INLINEFORM3, as given by DISPLAYFORM0 The gradient of INLINEFORM0 with respect to the word vector components is 0 for all word vector components whose corresponding words do not appear as inputs, outputs, or negative examples in (EQREF25).
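Returning to the column-wise layout described above, the following small Scala sketch spells out the worked example (300-dimensional vectors over 10 equal shards, 30 components each; the indexing here is 0-based and may differ from the text's convention) and checks that a full dot product is the sum of the per-shard partial dot products.

```scala
// Column-wise partitioning: shard s stores components [30*s, 30*s + 30) of every vector,
// and any full dot product decomposes into a sum of per-shard partial dot products.
object ColumnPartition {
  val d = 300
  val numShards = 10
  val sliceLen = d / numShards // 30 components per shard

  def shardRange(s: Int): Range = (s * sliceLen) until ((s + 1) * sliceLen)

  def partialDot(u: Array[Float], v: Array[Float], s: Int): Double =
    shardRange(s).map(c => u(c).toDouble * v(c)).sum

  def main(args: Array[String]): Unit = {
    val rng = new scala.util.Random(0)
    val u = Array.fill(d)(rng.nextFloat())
    val v = Array.fill(d)(rng.nextFloat())
    val full = u.indices.map(c => u(c).toDouble * v(c)).sum
    val fromShards = (0 until numShards).map(partialDot(u, v, _)).sum
    println(s"full = $full, sum of partials = $fromShards") // equal up to rounding
  }
}
```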
For the remaining components, the gradient is conveniently expressed in groups of components corresponding to specific word vectors. For example, consider a pair of indices INLINEFORM1 belonging to INLINEFORM2 . The gradient components corresponding to the word vector INLINEFORM3 can be expressed as DISPLAYFORM0 We see that evaluation of INLINEFORM0 requires computing the dot (or inner) products INLINEFORM1 appearing in the arguments to INLINEFORM2 and then computing linear combinations of the vectors INLINEFORM3 and INLINEFORM4 , with weights depending on the dot products. A similar expression and computation applies to the other gradient components corresponding to other word vectors appearing in INLINEFORM5 . The vector INLINEFORM6 (and, correspondingly, the other vectors as well) are updated according to the usual SGD update rule DISPLAYFORM0 where INLINEFORM0 is a (suitably small) learning rate. Once a client has assembled the indices (indexing according to the order INLINEFORM0 above) of positive output examples and input words corresponding to a minibatch INLINEFORM1 , it interacts with the parameter server shards to compute ( EQREF26 ) and ( EQREF27 ) using two remote procedure calls (RPCs), dotprod and adjust, which are broadcasted to all PS shards, along with an intervening computation to aggregate results from the dotprod RPC returned by each shard. The RPC calls are detailed in Figures SECREF8 and SECREF8 (in the Appendix), and, at a higher level, entail the following server/shard side operations: dotprod: Select negative examples INLINEFORM0 in ( EQREF26 ) according to a probability distribution derived from the vocabulary histogram proposed in BIBREF0 , but with the client thread supplied seed initializing the random number generation, and then return all partial dot products required to evaluate the gradient ( EQREF26 ) for all positive output, negative output, and input word vectors associated with the minibatch, wherein the partial dot products involve those vector components stored on the designated shard: INLINEFORM1 . adjust: Regenerate negative examples used in preceding dotprod call using the same seed that is again supplied by the client thread. Compute ( EQREF27 ) for vector components associated with the minibatch stored on the shard as a partial vector (restricted to components stored on shard) linear combination using weights received from the client. Between these two RPCs the client computes the linear combination weights needed for adjust by summing the partial inner products returned by the shards in response to the dotprod calls and evaluating the sigmoid function at values given by the aggregated dot products. These weights are then passed to the adjust RPC, along with the seeds for regenerating the identical random negative example indices INLINEFORM0 that were generated during the dotprod RPC. The retransmission simplifies the server in that state need not be maintained between corresponding dotprod and adjust calls. Note that the same seeds are sent to all shards in both calls so that each shard generates the same set of negative example indices. The shards are multithreaded and each thread handles the stream of RPC's coming from all client threads running on a single node. In a typical at scale run of the algorithm, the above process is carried out by multiple client threads running on each of a few hundred nodes, all interacting with the PS shards in parallel. 
The data set is iterated over multiple times and after each iteration, the learning rate INLINEFORM0 is reduced in a manner similar to the open source implementation of BIBREF0 . Note that there is no locking or synchronization of the word vector state within or across shards or across client threads during any part of the computation. The only synchronization in effect is that the RPC broadcast ensures that all shards operate on the same set of word vector indices for computing their portion of the corresponding calls. Additionally, the client threads independently wait for all responses to their corresponding dotprod calls before proceeding. The lack of synchronization introduces many approximations into the overall SGD computation, similar in spirit to the HOGWILD BIBREF20 and Downpour SGD BIBREF21 distributed optimization schemes. For example, here, in the worst case, the state of the vectors associated with a minibatch could change between the dotprod and adjust calls issued by a single client thread. Nevertheless, despite such approximations, our distributed algorithm incurs surprisingly little degradation in the quality of the trained vectors as compared to single machine solutions (in cases where the computation can be carried out on one machine), as shown in Section SECREF7 . Two details of our version of the algorithm and implementation are helpful for improving convergence/performance on some data sets. One is that in the adjust computation (Figure SECREF8 ) the word vectors belonging to the minibatch are not updated until the end of the call so that references to word vectors throughout the call are to their values at the start of the call. The second is an option for interleaved minibatch formation, which can be used to ensure that indices INLINEFORM0 of input words belonging to a minibatch are sufficiently separated in the training corpus, and ideally, belong to different sentences. This allows input word vectors within a sentence (which are linked through their overlapping output word windows) to “learn” from each other during a single training iteration, as their respective minibatches are processed. Network bandwidth analysis Using the same notation as in ( EQREF15 ), and letting INLINEFORM0 denote the number of shards, the average bytes transferred from all PS shards for each dotprod call is upper bounded by DISPLAYFORM0 That is, each shard transfers the partial dot product results between the input vector of each minibatch word and all context words (there are no more than an average of INLINEFORM0 of these per minibatch word) and negative examples (there are no more than INLINEFORM1 per context per minibatch word, or INLINEFORM2 per minibatch word). It is not hard to see that this is precisely the number of bytes transferred to all PS shards for the vector linear combination component of each adjust call. That is, there are two linear vector updates for each pair of vectors for which a dot product was computed, and these updates involve the same linear combination weight. Normalizing ( EQREF31 ) by the minibatch size, we have the following counterpart of ( EQREF15 ) for the bytes transferred, in each direction, per trained minibatch word, for the proposed scheme: DISPLAYFORM0 Notice that the vector dimension INLINEFORM0 has been replaced by the number of shards INLINEFORM1 . 
The ratio of the network bandwidths of the proposed system and a conventional parameter server based system is INLINEFORM0 For typical parameters of interest (we typically have INLINEFORM0 between 10 and 20, increasing with INLINEFORM1 between 300 and 1000), this is in the range of INLINEFORM2 to INLINEFORM3 , effectively eliminating network bandwidth as a bottleneck for training latency, relative to the conventional approach. Implementation on Hadoop We have implemented the system described in Section SECREF5 in Java and Scala on a Hadoop YARN scheduled cluster, leveraging Slider BIBREF10 and Spark BIBREF11 . Our end-to-end implementation of training carries out four steps: vocabulary generation, data set preprocessing, training, and vector export. We next review the details of each of these steps. Throughout, all data, including the initial training data, its preprocessed version, the exported vectors are all stored in the Hadoop Distributed File System (HDFS). We remark that although our compute environment is currently based on Hadoop and Spark, other distributed computational frameworks such as the recently released TensorFlow could also serve as a platform for implementing the proposed system. Main steps This step entails counting occurrences of all words in the training corpus and sorting them in order of decreasing occurrence. As mentioned, the vocabulary is taken to be the INLINEFORM0 most frequently occurring words, that occur at least some number INLINEFORM1 times. It is implemented in Spark as a straight-forward map-reduce job. In this step, each word in the training corpus is replaced by its index in the sorted vocabulary generated in the preceding phase (the ordering INLINEFORM0 referred to in Section SECREF5 ). This is also implemented in Spark using a low overhead in-memory key-value store to store the mapping from vocabulary words to their indices. Our implementation hashes words to 64 bit keys to simplify the key-value store. Referring to the system description in Section SECREF5 (and Figure FIGREF18 ), the parameter server portion is implemented in Java, with the RPC layer based on the Netty client-server library BIBREF30 . The RPC layer of the client is implemented similarly. The higher layers of the client (i/o, minibatch formation, partial dot product aggregation, linear combination weight computation) are implemented in Scala and Spark. In particular, the clients are created and connect to the PS shards from within an RDD mapPartitions method applied to the preprocessed data set that is converted to an RDD via the standard Spark file-to-RDD api. At the start of training, the PS shards are launched from a gateway node onto Hadoop cluster nodes using the Apache Slider application that has been designed to launch arbitrary applications onto a Hadoop YARN scheduled cluster. The IP addresses and ports of the respective PS shards are extracted and passed to the Spark executors (which in turn use them to connect respective clients to the PS shards) as a file via the standard spark-submit command line executed on a gateway node. Each mapPartitions operation in the clients is multi-threaded with a configurable number of threads handling the processing of the input data and the interaction with the PS shards. These threads share the same connections with the PS shards. The PS shards are also multi-threaded based on Netty, wherein a configurable number of worker threads process incoming dotprod and adjust requests from multiple connections in parallel. 
Each shard has a connection to each Spark executor. The word vector portions are stored in each PS shard in arrays of primitive floats, and as mentioned, their indices in the arrays coincide with the indices of their corresponding words in the vocabulary. In the steady state, the PS allocates no new data structures to avoid garbage collection. Objects are created only during start-up, and possibly during the fairly infrequent connection setups, as managed by the Netty RPC layer. In this final step, carried out after training has completed, the partial vectors stored in each PS shard are aggregated and joined with their respective words in the vocabulary and stored together as a text file in HDFS. Again, we leverage Spark to carry out this operation in a distributed fashion, by creating an RDD from the vocabulary and using mapPartitions to launch clients that get the partial vectors from the PS shards for the respective partition of vocabulary words, combine the partial vectors and save corresponding word and vectors pairs to HDFS. Training step throughput To give an idea of the kind of training throughput we can achieve with this system, the following is one configuration we have used for training the sponsored search application on our Hadoop cluster: Algorithm parameters: 200 million word vocabulary, 5 negative examples, maximum of 10 window size Training system parameters: 200 Spark executors, 8 threads per spark executor, minibatch size of 200 yields the following training throughputs in minibatch input words per second (see Section SECREF3 for the definition of input word), for varying PS shards and vector dimensions: For this data set and algorithm parameters, each input word has associated with it an average of about 20 positive context words and negative examples, so that the system is effectively updating about 21 times the third column in the table number of vectors per second. For the first line of the table, for example, this is over 33 million 300 dimensional vector updates per second. The conventional parameter server approach would require a total bandwidth of about 300 Gbps (30 server shards would be needed for this) to and from the parameter server for similar training throughput. This is close to 10 percent of the fabric bandwidth in our production data center. The proposed system requires only about 15 Gbps, making it far more practical for deployment to production in a shared data center, especially in light of the training latency for which this bandwidth must be sustained, which is about two days for data sets of interest. Even more extreme is the last line of the table (the 1000 dim. case), for which the equivalent throughput conventional system would require 800 Gbps vs. 20 Gbps for the proposed system. One important property of the training system is that its throughput at any given time is limited by the throughput of the slowest PS shard at that time. With this in mind, we use the YARN scheduler resource reservation capability exported through Slider to minimize resource contention on all of the machines to which the PS shards are assigned, thereby achieving higher sustained throughput. Another important property of the training system is that increasing the number of shards beyond some point is not helpful since the vector portions handled by each shard become so small that the random access memory transaction bandwidth (number of random cache lines per second) becomes the bottle neck. 
This explains the limited throughput scaling with PS shards for the 300 dimensional case above. Further optimization of the vector-store of each PS shard with respect to caching and non-uniform memory access might be beneficial. We leave this for future investigation. Evaluation & Deployment In this section, we provide evidence that the vectors trained by the proposed distributed system are of high quality, even with fairly aggressive parallelism during training. We also show bucket test results on live web search traffic that compare query-ad matching performance of our large-vocabulary model to the one trained using single-machine implementation, which led to the decision to deploy the proposed system in production in late 2015. Benchmark data set To compare the proposed distributed system we trained vectors on a publicly available data set collected and processed by the script 'demo-train-big-model-v1-compute-only.sh' from the open-source package of BIBREF0 . This script collects a variety of publicly available text corpuses and processes them using the algorithm described in BIBREF0 to coalesce sufficiently co-occurring words into phrases. We then randomly shuffled the order of sentences (delimited by new line) in the data set, retaining order of words within each sentence. The resulting data set has about 8 billion words and yields a vocabulary of about 7 million words and phrases (based on a cut off of 5 occurrences in the data set). We evaluated accuracy on the phrase analogies in the 'question-phrases.txt' file and also evaluated Spearman's rank correlation with respect to the editorial evaluation of semantic relatedness of pairs of words in the well known wordsim-353 collection BIBREF31 . The results are shown in Table TABREF34 . The first column shows results for the single machine implementation of BIBREF0 , the second for a 'low parallelism' configuration of our system using 50 Spark executors, minibatch size of 1, and 1 thread per executor, and the third column for a 'high parallelism' configuration again with 50 executors, but with minibatch size increased to 50 and 8 threads per executor. The various systems were run using the skipgram variant with 500 dimensional vectors, maximum window size of 20 (10 in each direction), 5 negative examples, subsample ratio of 1e-6 (see BIBREF0 ), initial learning rate of 0.01875, and 3 iterations over the data set. It can be seen that the vectors trained by the 'high parallelism' configuration of the proposed system, which is the closest to the configurations required for acceptable training latency in the large-scale sponsored search application, suffers only a modest loss in quality as measured by these tests. Note that this data set is more challenging for our system than the sponsored search data set, as it is less sparse and there is on average more overlap between words in different minibatches. In fact, if we attempt to increase the parallelism to 200 executors as was used for the training of the vectors described in the next subsection, training fails to converge altogether. We are unsure why our system yields better results than the implementation of BIBREF0 on the wordsim test, yet worse scores on the analogies test. We also note that the analogies test scores reported here involve computing the closest vector for each analogy “question” over the entire vocabulary and not just over the 1M most frequent words, as in the script 'demo-train-big-model-v1-compute-only.sh' of BIBREF0 . 
Sponsored Search data set We conducted qualitative evaluation in the context of sponsored search application described in Section SECREF2 . Figure FIGREF47 shows the queries whose trained vectors were found to be most similar (out of 133M queries) to an example ad vector, along with the respective cosine similarities to the ad vector. The figure shows the ten most and least similar among the 800 most similar queries, where we note that the ten least similar queries can still be considered to be fairly semantically similar. This particular set of vectors was trained for a vocabulary of 200M generalized words using the 300 dimensional vector, 15 PS shard settings described in Section SECREF41 . We found the vector quality demonstrated in Figure FIGREF47 to be the norm based on inspections of similar matchings of query vectors to a number of ad vectors. We also compared the cosine similarities for pairs of vectors trained using the proposed distributed system and for corresponding vector pairs trained using the open–source implementation of BIBREF0 , again on a large search session data set. The former was trained using a vocabulary of 200 million generalized words while the latter was trained using about 90 million words which is the most that could fit onto a specialized large memory machine. For a set of 7,560 generalized word pairs with words common to the vocabularies trained by the respective systems we found very good agreement in cosine similarities between the corresponding vectors from the two systems, with over 50% of word pairs having cosine similarity differences less than 0.06, and 91% of word pairs having differences less than 0.1. Online A/B tests Following successful offline evaluation of the proposed distributed system, in the following set of experiments we conducted tests on live web search traffic. We ran two bucket tests, each on INLINEFORM0 of search traffic, where we compared query-ad matches produced by training query and ad vectors using search session data set spanning 9 months of search data. One model was trained using implementation from BIBREF0 and the other was trained using the proposed distributed system. Both buckets were compared against control bucket, which employed a collection of different broad match techniques used in production at the time of the test. Each of the online tests were run for 10 days, one after another, more than a month apart. The results of the tests were reported in terms of query coverage (portion of queries for which ads were shown), Auction Depth (number of ads per query that made it into an auction) click-through rate (CTR, or number of ad clicks divided by number of ad impressions), click yield (number of clicks), and revenue. Instead of the actual numbers we show relative improvement over control metrics. Both methods produced a separate query-ad match dictionary by finding INLINEFORM0 nearest ads in the embedding space for each search query from our vocabulary, and keeping only ads with cosine similarity above INLINEFORM1 . The threshold was chosen based on editorial results. To implement the bucket test the query-ad match dictionary is produced offline and cached in the ad server memory such that ads can be retrieved in real-time given an input query. Post retrieval, a click model is used to estimate the clickability of the ad for that query and the ad is sent into an auction, where it competes with ads retrieved by other broad match algorithms. It gets to be shown to the user in case it wins one of the ad slots on the page. 
The first A/B test was conducted to evaluate the value of query-ad dictionary produced by single-machine implementation. This implementation could scale up to a model with 50M query vectors. It was compared against control bucket that ran a production broad match module. Following positive A/B test metrics, with improvements in coverage and revenue, presented in the first row of Table TABREF48 , the dictionary was launched to production and incorporated into the existing broad match production model. The second A/B test was conducted to evaluate incremental improvement over the single machine solution, which was already launched in production. The model contained vectors for 133M queries. As it can be observed in the second row of Table TABREF48 , the distributed solution provided additional 2.44% query coverage and additional 9.39% revenue, without degrading user experience (CTR remained neutral). This strong monetization potential of our distributed system for training large vocabularies of query and ad vectors led to its deployment in our sponsored search platform. The model is being retrained on a weekly basis, automated via Apache Oozie BIBREF32 , and is currently serving more than INLINEFORM0 of all broad matches. Conclusion In this paper, we presented a novel scalable word2vec training system that, unlike available systems, can train semantically accurate vectors for hundreds of millions of vocabulary words with training latency and network bandwidth usage suitable for regular training on commodity clusters. We motivated the usefulness of large vocabulary word2vec training with a sponsored search application involving generalized “words” corresponding to queries, ads, and hyperlinks, for which the proposed system has been deployed to production. The results on both benchmark data sets and online A/B tests strongly indicate the benefits of the proposed approach. [ht] INLINEFORM0 .dotprod( INLINEFORM1 , INLINEFORM2 , long INLINEFORM3 ) INLINEFORM0 Random Number Generator initialized with INLINEFORM1 INLINEFORM2 ; INLINEFORM3 iterate over words in minibatch INLINEFORM4 INLINEFORM5 iterate over words in context INLINEFORM6 INLINEFORM7 INLINEFORM8 ; INLINEFORM9 generate INLINEFORM10 random negative examples for current output word INLINEFORM11 Array( INLINEFORM12 negative word indices INLINEFORM13 , generated using INLINEFORM14 ) compute partial dot products for positive and negative examples INLINEFORM15 INLINEFORM16 INLINEFORM17 send results back to client INLINEFORM18 Server side computation - dotprod. [ht] void INLINEFORM19 .adjust( INLINEFORM20 , INLINEFORM21 , INLINEFORM22 , INLINEFORM23 , INLINEFORM24 ) INLINEFORM0 Random Number Generator initialized with INLINEFORM1 INLINEFORM2 ; INLINEFORM3 ; INLINEFORM4 ; INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 ; INLINEFORM11 regenerate random negative examples INLINEFORM12 Array( INLINEFORM13 negative word indices INLINEFORM14 , generated using INLINEFORM15 ) compute partial gradient updates and store in scratch area INLINEFORM16 ; INLINEFORM17 INLINEFORM18 INLINEFORM19 ; INLINEFORM20 add partial gradient updates to partial vectors in store all INLINEFORM21 INLINEFORM22 ; INLINEFORM23 Server side computation - adjust. 
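As a reading aid for the two server-side procedures sketched above (whose symbols were lost in extraction), the following Python sketch shows one possible shape of a shard holding a column slice of the input and output vector matrices; class and argument names are invented for illustration, and the real shards are multithreaded Java/Netty services rather than Python objects:

import numpy as np

class PSShard:
    def __init__(self, vocab_size, n_cols, neg_table, seed=0):
        rng = np.random.RandomState(seed)
        # this shard's columns of the input (U) and output (V) vector matrices
        self.U = ((rng.rand(vocab_size, n_cols) - 0.5) / n_cols).astype(np.float32)
        self.V = np.zeros((vocab_size, n_cols), dtype=np.float32)
        self.neg_table = np.asarray(neg_table)   # word indices drawn per the vocabulary histogram

    def _pairs(self, inputs, contexts, n_neg, seed):
        # Deterministic enumeration of (input, output, label) pairs; because the client
        # sends the same seed to every shard in both dotprod and adjust, all shards
        # regenerate identical negative examples in identical order.
        rng = np.random.RandomState(seed)
        for w, ctx in zip(inputs, contexts):
            for o in ctx:
                yield w, o, 1.0
                for _ in range(n_neg):
                    yield w, int(self.neg_table[rng.randint(len(self.neg_table))]), 0.0

    def dotprod(self, inputs, contexts, n_neg, seed):
        # Partial dot products restricted to this shard's columns; only these
        # scalars (not vectors) are returned to the client.
        return np.array([float(self.U[w] @ self.V[o])
                         for w, o, _ in self._pairs(inputs, contexts, n_neg, seed)],
                        dtype=np.float32)

    def adjust(self, inputs, contexts, n_neg, seed, coeffs, alpha):
        # Apply partial SGD updates using the client-computed linear-combination weights.
        # Updates are accumulated in a scratch area and applied at the end of the call.
        dU, dV = {}, {}
        for (w, o, _), g in zip(self._pairs(inputs, contexts, n_neg, seed), coeffs):
            dU[w] = dU.get(w, 0.0) + alpha * g * self.V[o]
            dV[o] = dV.get(o, 0.0) + alpha * g * self.U[w]
        for w, d in dU.items():
            self.U[w] += d
        for o, d in dV.items():
            self.V[o] += d

Each shard sees only its own column slice; the full d-dimensional vectors are never materialized in one place during training, only at export time.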
[ht] InputinputOutputoutput INLINEFORM0 : Vectors for vocabulary words INLINEFORM1 = # of parameter servers needed for INLINEFORM2 words Launch parameter servers INLINEFORM3 Initialize vectors in PS server iteration INLINEFORM4 INLINEFORM5 UnprocessedPartitions INLINEFORM6 INLINEFORM7 each executor, in parallel UnprocessedPartitions is non-empty INLINEFORM8 INLINEFORM9 next partition in UnprocessedPartitions Launch client INLINEFORM10 connected to INLINEFORM11 INLINEFORM12 INLINEFORM13 minibatches in INLINEFORM14 INLINEFORM15 = randomly select a seed INLINEFORM16 INLINEFORM17 Array of word indices in INLINEFORM18 INLINEFORM19 INLINEFORM20 Array of Arrays of context word indices of words in INLINEFORM21 client broadcasts word indices to shards which compute partial dot products in parallel, returning results to client INLINEFORM22 INLINEFORM23 , in parallel INLINEFORM24 = INLINEFORM25 .dotprod( INLINEFORM26 , INLINEFORM27 , INLINEFORM28 ) aggregate partial dot products and compute linear coefficients for gradient update INLINEFORM29 INLINEFORM30 ; INLINEFORM31 client broadcasts coefficients to shards which compute partial vector linear combinations INLINEFORM32 INLINEFORM33 , in parallel INLINEFORM34 .adjust( INLINEFORM35 , INLINEFORM36 , INLINEFORM37 , INLINEFORM38 , INLINEFORM39 ) input vectors INLINEFORM40 } from INLINEFORM41 Grid based word2vec algorithm.
relational entities, general text-based attributes, descriptive text of images, nodes in graph structure of networks, queries
38b527783330468bf6c4829f7d998e6f17c615f0
38b527783330468bf6c4829f7d998e6f17c615f0_0
Q: Do they perform any morphological tokenization? Text: Introduction Embedding words in a common vector space can enable machine learning algorithms to achieve better performance in natural language processing (NLP) tasks. Word2vec BIBREF0 is a recently proposed family of algorithms for training such vector representations from unstructured text data via shallow neural networks. The geometry of the resulting vectors was shown in BIBREF0 to capture word semantic similarity through the cosine similarity of the corresponding vectors as well as more complex semantic relationships through vector differences, such as vec(“Madrid”) - vec(“Spain”) + vec(“France”) INLINEFORM0 vec(“Paris”). More recently, novel applications of word2vec involving unconventional generalized “words” and training corpuses have been proposed. These powerful ideas from the NLP community have been adapted by researchers from other domains to tasks beyond representation of words, including relational entities BIBREF1 , BIBREF2 , general text-based attributes BIBREF3 , descriptive text of images BIBREF4 , nodes in graph structure of networks BIBREF5 , and queries BIBREF6 , to name a few. While most NLP applications of word2vec do not require training of large vocabularies, many of the above mentioned real-world applications do. For example, the number of unique nodes in a social network BIBREF5 or the number of unique queries in a search engine BIBREF6 can easily reach few hundred million, a scale that is not achievable using existing word2vec implementations. The training of vectors for such large vocabularies presents several challenges. In word2vec, each vocabulary word has two associated INLINEFORM0 -dimensional vectors which must be trained, respectively referred to as input and output vectors, each of which is represented as an array of INLINEFORM1 single precision floating point numbers BIBREF0 . To achieve acceptable training latency, all vectors need to be kept in physical memory during training, and, as a result, word2vec requires INLINEFORM2 bytes of RAM to train a vocabulary INLINEFORM3 . For example, in Section SECREF2 , we discuss the search advertisement use case with 200 million generalized words and INLINEFORM4 which would thus require INLINEFORM5 = 480GB memory which is well beyond the capacity of typical commodity servers today. Another issue with large vocabulary word2vec training is that the training corpuses required for learning meaningful vectors for such large vocabularies, are themselves very large, on the order of 30 to 90 billion generalized words in the mentioned search advertising application, for example, leading to potentially prohibitively long training times. This is problematic for the envisioned applications which require frequent retraining of vectors as additional data containing new “words” becomes available. The best known approach for refreshing vectors is to periodically retrain on a suitably large window comprised of the most recent available data. In particular, we found that tricks like freezing the vectors for previously trained words don't work as well. The training latency is thus directly linked to staleness of the vectors and should be kept as small as feasible without compromising quality. Our main contribution is a novel distributed word2vec training system for commodity shared compute clusters that addresses these challenges. 
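To make the memory figure explicit: with two single-precision d-dimensional vectors per vocabulary word, the requirement is 8·d·|V| bytes, so for the quoted example (taking d = 300, an assumption consistent with the 480GB figure):

2 \times 4\,\text{bytes} \times d \times |\mathcal{V}| \;=\; 8 \times 300 \times 2\times 10^{8} \;=\; 4.8\times 10^{11}\ \text{bytes} \;=\; 480\ \text{GB}.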
The proposed system: As discussed in Section SECREF4 , to the best of our knowledge, this is the first word2vec training system that is truly scalable in both of these aspects. We have implemented the proposed word2vec training system in Java and Scala, leveraging the open source building blocks Apache Slider BIBREF10 and Apache Spark BIBREF11 running on a Hadoop YARN-scheduled cluster BIBREF12 , BIBREF13 . Our word2vec solution enables the aforementioned applications to efficiently train vectors for unprecedented vocabulary sizes. Since late 2015, it has been incorporated into the Yahoo Gemini Ad Platform (https://gemini.yahoo.com) as a part of the “broad” ad matching pipeline, with regular retraining of vectors based on fresh user search session data. Sponsored search use case Sponsored search is a popular advertising model BIBREF14 used by web search engines, such as Google, Microsoft, and Yahoo, in which advertisers sponsor the top web search results in order to redirect user's attention from organic search results to ads that are highly relevant to the entered query. Most search engines provide a self-service tool in which the advertisers can create their own ads by providing ad creative to be shown to the users, along with a list of bid terms (i.e., queries for which advertisers wish to show their ad). Due to a large number of unique queries it is challenging for advertisers to identify all queries relevant to their product or service. For this reason search engines often provide a service of “broad” matching, which automatically finds additional relevant queries for advertisers to bid on. This is typically implemented by placing queries and ads in a common feature space, such as bag-of-words using tf-idf weighting, and calculating similarity between ads and queries using a feature space metric in order to find good broad match candidates. In an unconventional application of word2vec to historical search logs, one could train query and ad vectors that capture semantic relationships and find relevant broad match candidates in the resulting feature space. The idea of using word2vec to train query representations is not new and has been suggested by several researchers in the past BIBREF15 , BIBREF6 . However, until now, it was not possible to use the algorithm to its fullest extent due to computational limitations of existing word2vec implementations. The sponsored search training corpus consists of billions of user search sessions each comprising generalized “words” corresponding to entire user queries (not the individual words in the queries), clicked hyperlinks, and clicked advertisements, ordered according to the temporal ordering of the corresponding user actions. Figure FIGREF1 shows a snippet from such a training corpus wherein the clicked ads and search link clicks are encoded as string IDs prefixed by “adid_” and “slc_”, respectively. The queries are highlighted in bold. The goal is to train vector representations for queries, hyperlinks, and advertisements, and to use the semantic similarity captured by these vectors to target advertisements to semantically relevant queries that might otherwise not be found to be relevant using more conventional measures, such as prior clicks or the number of constituent words common to the query and advertisement meta data (i.e., title, description, bid keywords). 
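As a purely illustrative rendering of this session format (the query strings and numeric IDs below are made up), a single training "sentence" is a temporally ordered sequence of generalized words:

session = [
    "gas mileage comparison",   # an entire user query is one generalized word
    "slc_48592017",             # a clicked organic search result hyperlink
    "best hybrid suv",          # another query from the same user
    "adid_2093471",             # a clicked advertisement
]

Vectors are trained jointly for all three kinds of tokens, so cosine similarity can later be computed between any query and any ad.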
Note that although the search result hyperlinks clicked by the user are not needed for the sponsored search system, they are nevertheless important to include during training as they help propagate relevance between the queries and ads of interest. Given trained query and ad vectors, finding relevant queries for a given ad amounts to calculating cosine similarity between the ad vector and all query vectors. The INLINEFORM0 queries with the highest similarity are retrieved as broad matches. As illustrated in Figure FIGREF5 for representative search session data, the fraction of query occurrences in the search sessions for which vectors are available, and hence for which potential ads can be found using this vector-based approach, increases at a steady pace with the number of queries in the vocabulary, even with as many as 120 million queries, each occurring at least 5 times. This observation suggests that this application can benefit greatly from vocabularies of 200 million or more generalized words. Moreover, we found that there are around 800 million generalized words that occur 5 or more times in our largest data sets, indicating that additional scaling far beyond 200 million is well worth pursuing. The results of BIBREF6 were based on training the largest vocabulary that could fit into the large memory of a special purpose server, which resulted in learned vector representations for about 45 million words. The proposed training system herein enables increasing this by several fold, resulting in far greater coverage of queries and a potentially significant boost in query monetization, as indicated by Figure FIGREF5 . The word2vec training problem In this paper we focus on the skipgram approach with random negative examples proposed in BIBREF0 . This has been found to yield the best results among the proposed variants on a variety of semantic tests of the resulting vectors BIBREF7 , BIBREF0 . Given a corpus consisting of a sequence of sentences INLINEFORM0 each comprising a sequence of words INLINEFORM1 , the objective is to maximize the log likelihood: DISPLAYFORM0 over input and output word row vectors INLINEFORM0 and INLINEFORM1 with INLINEFORM2 ranging over the words in the vocabulary INLINEFORM3 , where: We follow BIBREF0 for setting INLINEFORM0 and select words occurring in the corpus a sufficient number of times (e.g., at least 5 times), or, if this results in too many words, as the most frequently occurring INLINEFORM1 words, where INLINEFORM2 is the largest number words that can be handled by available computational resources. We further also assume a randomized version of ( EQREF6 ) according to the subsampling technique of BIBREF0 , which removes some occurrences of frequent words. The algorithm for maximizing ( EQREF6 ) advocated in BIBREF0 , and implemented in its open–source counterpart, is a minibatch stochastic gradient descent (SGD). Our training system is also based on minibatch SGD optimization of ( EQREF6 ), however, as described in Section SECREF5 , it is carried out in a distributed fashion in a manner quite different from the implementation of BIBREF0 . Any form of minibatch SGD optimization of ( EQREF6 ) involves the computation of dot products and linear combinations between input and output word vectors for all pairs of words occurring within the same window (with indices in INLINEFORM0 ). 
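The displayed objective did not survive extraction; in the standard skipgram-with-negative-sampling form of BIBREF0 that this section refers to, it can be written (with u_w and v_w the input and output row vectors, C(i,j) the window of context positions around position j of sentence i, and N_{i,j,k} a set of n negative words drawn from the scaled unigram distribution) as

\sum_{i}\sum_{j}\sum_{k \in \mathcal{C}(i,j)} \Big[ \log \sigma\!\big(u_{w_{i,j}} v_{w_{i,k}}^{\top}\big) \;+\; \sum_{\tilde w \in \mathcal{N}_{i,j,k}} \log \sigma\!\big(-\,u_{w_{i,j}} v_{\tilde w}^{\top}\big) \Big], \qquad \sigma(x) = \frac{1}{1+e^{-x}}.

Maximizing this by minibatch SGD reduces, for every positive and negative (input, output) pair, to exactly the dot products and vector linear combinations just mentioned.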
This is a massive computational task when carried out for multiple iterations over data sets with tens of billions of words, as encountered in applications described in the previous section. Single machine Several existing word2vec training systems are limited to running on a single machine, though with multiple parallel threads of execution operating on different segments of training data. These include the original open source implementation of word2vec BIBREF0 , as well as those of Medallia BIBREF16 , and Rehurek BIBREF17 . As mentioned in the introduction, these systems would require far larger memory configurations than available on typical commodity-scale servers. Distributed data-parallel A similar drawback applies to distributed data-parallel training systems like those available in Apache Spark MLLib BIBREF18 and Deeplearning4j BIBREF19 . In the former, in each iteration the Spark driver sends the latest vectors to all Spark executors. Each executor modifies its local copy of vectors based on its partition of the training data set, and the driver then combines local vector modifications to update the global vectors. It requires all vectors to be stored in the memory of all Spark executors, and, similarly to its single machine counterparts, is thus unsuitable for large vocabularies. The Deeplearning4j system takes a similar approach and thus suffers from the same limitations, although it does enable the use of GPUs to accelerate the training on each machine. Parameter servers A well-known distributed architecture for training very large machine learning models centers around the use of a parameter server to store the latest values of model parameters through the course of training. A parameter server is a high performance, distributed, in-memory key-value store specialized to the machine learning training application. It typically needs to support only fixed-size values corresponding to the model parameters, and also may support additive updates of values in addition to the usual key-value gets and puts. A parameter server-based training system also includes a number of worker/learner/client nodes that actually carry out the bulk of the training computations. The client nodes read in and parse training data in chunks or minibatches, fetch the model parameters that can be updated based on each minibatch, compute the updates (e.g., via gradient descent with respect to a minibatch restriction of the objective), and transmit the changes in parameter values to the parameter server shards which either overwrite or incrementally update these values in their respective in-memory stores. As observed and partially theoretically justified in BIBREF20 (see also BIBREF21 ), in many applications involving sparse training data characterized by low average overlap between the model parameters associated with different minibatches, the model parameter updates arriving in parallel from multiple client nodes can be aggregated on the parameter server shards without locking, synchronization, or atomicity guarantees, and still result in a far better model accuracy versus training time latency trade-off than single threaded (i.e., sequential) training. 
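A parameter server shard therefore reduces, at its core, to something like the following interface (a deliberately minimal Python sketch; names are illustrative and the real stores are sharded, multithreaded services):

import numpy as np

class KVParameterShard:
    """Fixed-size float vectors with get/put plus additive increments,
    matching the description above; updates are applied without locking."""
    def __init__(self, dim):
        self.dim = dim
        self.store = {}

    def get(self, key):
        return self.store.setdefault(key, np.zeros(self.dim, dtype=np.float32))

    def put(self, key, value):
        self.store[key] = np.asarray(value, dtype=np.float32)

    def increment(self, key, delta):
        self.get(key)[:] += delta   # additive update, no atomicity guarantees

Client nodes fetch the vectors touched by a minibatch with get, compute gradients locally, and push the changes back with put or increment, as the text describes next for the word2vec case.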
The parameter server paradigm has been applied successfully to the training of very large models for logistic regression, deep learning, and factorization machines, and to sampling from the posterior topic distribution in large-scale Latent Dirichlet Allocation BIBREF22 , BIBREF23 , BIBREF21 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF9 , BIBREF27 , BIBREF28 . There have also been some attempts to extend the parameter-server approach to word2vec (e.g., BIBREF29 ). These have followed the above computational flow, with each parameter server shard storing the input and output vectors for a subset of the vocabulary. Multiple client nodes process minibatches of the training corpus, determining for each word in each minibatch the associated context words and random negative examples, issuing get requests to the parameter server shards for the corresponding vectors, computing the gradients with respect to each vector component, and issuing put or increment requests to update the corresponding vectors in the parameter server shards. Unfortunately, such a conventional parameter server-based word2vec training system requires too much network bandwidth to achieve acceptable training throughput. Using the skipgram training algorithm and denoting algorithm parameters as INLINEFORM0 for vector dimension, INLINEFORM1 for number of words per minibatch, INLINEFORM2 for average context size, and INLINEFORM3 for the number of random negative examples per context word, assuming negligible repetition of words within the minibatch and among the negative examples, and further assuming that vectors and their gradients are communicated and stored as arrays of single-precision floating point numbers at 4 bytes each, the amount of word vector data transferred for each get and put call from and to the parameter server, respectively, is on average INLINEFORM4 , or about DISPLAYFORM0 bytes per trained minibatch word. The formula arises from the fact that the input and output vectors for each term in the minibatch must be sent (this the '2' in the first factor in ( EQREF15 )), as must the output vectors for each random negative example. There are on average INLINEFORM0 of these per minibatch word. For INLINEFORM0 , values within the ranges recommended in BIBREF0 , this works out to INLINEFORM1 bytes transferred per word with each get and put. For 10 iterations of training on a data set of roughly 50 billion words, which is in the middle of the relevant range for the sponsored search application described in Section SECREF2 , attaining a total training latency of one week using the above system would require an aggregate bandwidth of at least 1300Gbits/sec to and from the parameter servers. This is impractically large for a single application on a commodity-hardware shared compute cluster. Moreover, one week training latency is already at the boundary of usefulness for our applications. In the next section, we present a different distributed system architecture for word2vec that requires significantly less network bandwidth for a given training throughput than the above conventional parameter server-based system, while continuing to accommodate large vocabularies and providing sufficient computational power to achieve the higher throughput allowed by the reduction in network bandwidth. Architecture Our distributed word2vec training system (i.e., for maximizing ( EQREF6 )) is illustrated in Figure FIGREF18 , with pseudo code for the overall computational flow in Figures SECREF8 , SECREF8 , and SECREF8 in the Appendix. 
As can be seen in Figure FIGREF18 , the proposed system also features parameter-server-like components (denoted by “PS shards” in the figure), however they are utilized very differently and have very different capabilities from their counterparts in the conventional approach described above. We shall, however, continue to refer to these components as parameter server shards. The system features the following innovations, explained in more detail below, with respect to the conventional approach. Column-wise partitioning of word vectors among parameter server (PS) shards (as opposed to word-wise partitioning). No transmission of word vectors or vector gradients across the network. Server-side computation of vector dot products and vector linear combinations, distributed by column partitions. Distributed server-side generation of random negative examples via broadcasting of common random number generator seeds. In particular, avoiding the transmission of vectors and gradients greatly reduces network bandwidth requirements relative to the conventional approach. We are not aware of any existing systems for training word2vec or its close relatives, matrix factorization and collaborative filtering (i.e., those systems cited in the previous section), that distribute vectors and compute in the manner of the proposed system. In our system, a number of parameter server shards each stores a designated portion of every input (row) vector INLINEFORM0 INLINEFORM1 INLINEFORM2 and output (row) vector INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 (dependence of components on INLINEFORM7 is suppressed). For example, assuming a vector dimension INLINEFORM8 , 10 parameter server shards, and equi-partitioned vectors, shard INLINEFORM9 would store the 30 components of INLINEFORM10 and INLINEFORM11 with indices INLINEFORM12 in the range INLINEFORM13 . We shall denote shard INLINEFORM14 stored portion of INLINEFORM15 and INLINEFORM16 as INLINEFORM17 and INLINEFORM18 , respectively. We refer to this as a 'column-wise' partitioning of the vectors, or more specifically, of the matrix whose rows correspond to the word vectors, as in INLINEFORM19 where INLINEFORM0 are the words in the vocabulary according to a fixed ordering INLINEFORM1 (e.g., by decreasing frequency of occurrence in the corpus). In the sequel, we shall equate each word INLINEFORM2 with INLINEFORM3 , its index in this ordering, so that INLINEFORM4 , and so on. For INLINEFORM5 shards, the vocabulary size can thus be scaled up by as much as a factor of INLINEFORM6 relative to a single machine. The vectors are initialized in the parameter server shards as in BIBREF0 . Multiple clients running on cluster nodes then read in different portions of the corpus and interact with the parameter server shards to carry out minibatch stochastic gradient descent (SGD) optimization of ( EQREF6 ) over the word vectors, following the algorithm in Figure SECREF8 (in the appendix). Specifically, the corpus is partitioned into disjoint minibatches with index sets INLINEFORM0 wherein each INLINEFORM1 is a subset of (sentence index, word index) pairs. For each INLINEFORM2 the word vectors are adjusted based on the gradient of the summation ( EQREF6 ) restricted to the input words belonging to INLINEFORM3 , as given by DISPLAYFORM0 The gradient of INLINEFORM0 with respect to the word vector components is 0 for all word vector components whose corresponding words do not appear as inputs, outputs, or negative examples in ( EQREF25 ). 
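The column layout just described is simple to pin down in code (a sketch for the equi-partitioned case with the dimension divisible by the number of shards):

def shard_columns(shard_id, dim, num_shards):
    # Half-open range [start, end) of vector components stored on this shard.
    width = dim // num_shards
    return shard_id * width, (shard_id + 1) * width

# With d = 300 and 10 shards, shard 3 holds components 90..119 of every u_w and v_w:
# shard_columns(3, 300, 10) -> (90, 120)

Every shard participates in every minibatch, but only for the rows (words) that actually appear in it, which is exactly the sparsity pattern of the gradient discussed next.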
For the remaining components, the gradient is conveniently expressed in groups of components corresponding to specific word vectors. For example, consider a pair of indices INLINEFORM1 belonging to INLINEFORM2 . The gradient components corresponding to the word vector INLINEFORM3 can be expressed as DISPLAYFORM0 We see that evaluation of INLINEFORM0 requires computing the dot (or inner) products INLINEFORM1 appearing in the arguments to INLINEFORM2 and then computing linear combinations of the vectors INLINEFORM3 and INLINEFORM4 , with weights depending on the dot products. A similar expression and computation applies to the other gradient components corresponding to other word vectors appearing in INLINEFORM5 . The vector INLINEFORM6 (and, correspondingly, the other vectors as well) are updated according to the usual SGD update rule DISPLAYFORM0 where INLINEFORM0 is a (suitably small) learning rate. Once a client has assembled the indices (indexing according to the order INLINEFORM0 above) of positive output examples and input words corresponding to a minibatch INLINEFORM1 , it interacts with the parameter server shards to compute ( EQREF26 ) and ( EQREF27 ) using two remote procedure calls (RPCs), dotprod and adjust, which are broadcasted to all PS shards, along with an intervening computation to aggregate results from the dotprod RPC returned by each shard. The RPC calls are detailed in Figures SECREF8 and SECREF8 (in the Appendix), and, at a higher level, entail the following server/shard side operations: dotprod: Select negative examples INLINEFORM0 in ( EQREF26 ) according to a probability distribution derived from the vocabulary histogram proposed in BIBREF0 , but with the client thread supplied seed initializing the random number generation, and then return all partial dot products required to evaluate the gradient ( EQREF26 ) for all positive output, negative output, and input word vectors associated with the minibatch, wherein the partial dot products involve those vector components stored on the designated shard: INLINEFORM1 . adjust: Regenerate negative examples used in preceding dotprod call using the same seed that is again supplied by the client thread. Compute ( EQREF27 ) for vector components associated with the minibatch stored on the shard as a partial vector (restricted to components stored on shard) linear combination using weights received from the client. Between these two RPCs the client computes the linear combination weights needed for adjust by summing the partial inner products returned by the shards in response to the dotprod calls and evaluating the sigmoid function at values given by the aggregated dot products. These weights are then passed to the adjust RPC, along with the seeds for regenerating the identical random negative example indices INLINEFORM0 that were generated during the dotprod RPC. The retransmission simplifies the server in that state need not be maintained between corresponding dotprod and adjust calls. Note that the same seeds are sent to all shards in both calls so that each shard generates the same set of negative example indices. The shards are multithreaded and each thread handles the stream of RPC's coming from all client threads running on a single node. In a typical at scale run of the algorithm, the above process is carried out by multiple client threads running on each of a few hundred nodes, all interacting with the PS shards in parallel. 
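Putting the pieces together, the client-side view of one minibatch is roughly the following sketch, where `shards` stands for client-side stubs of the two server procedures and the positive/negative label pattern mirrors the deterministic pair order the shards use (all names are illustrative):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_minibatch(shards, inputs, contexts, n_neg, alpha, seed):
    # 1. Broadcast dotprod: each shard returns partial dot products over its columns.
    partials = [s.dotprod(inputs, contexts, n_neg, seed) for s in shards]
    dots = np.sum(partials, axis=0)                  # aggregate across column partitions
    # 2. Linear-combination weights: one positive followed by n_neg negatives per context word.
    n_ctx = sum(len(c) for c in contexts)
    labels = np.tile(np.array([1.0] + [0.0] * n_neg, dtype=np.float32), n_ctx)
    coeffs = labels - sigmoid(dots)
    # 3. Broadcast adjust with the same seed so every shard regenerates the same
    #    negatives and applies its partial linear-combination update.
    for s in shards:
        s.adjust(inputs, contexts, n_neg, seed, coeffs, alpha)

Only seeds, word indices, partial dot products, and these scalar coefficients cross the network in either direction; no word vectors or gradients are ever transmitted.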
The data set is iterated over multiple times and after each iteration, the learning rate INLINEFORM0 is reduced in a manner similar to the open source implementation of BIBREF0 . Note that there is no locking or synchronization of the word vector state within or across shards or across client threads during any part of the computation. The only synchronization in effect is that the RPC broadcast ensures that all shards operate on the same set of word vector indices for computing their portion of the corresponding calls. Additionally, the client threads independently wait for all responses to their corresponding dotprod calls before proceeding. The lack of synchronization introduces many approximations into the overall SGD computation, similar in spirit to the HOGWILD BIBREF20 and Downpour SGD BIBREF21 distributed optimization schemes. For example, here, in the worst case, the state of the vectors associated with a minibatch could change between the dotprod and adjust calls issued by a single client thread. Nevertheless, despite such approximations, our distributed algorithm incurs surprisingly little degradation in the quality of the trained vectors as compared to single machine solutions (in cases where the computation can be carried out on one machine), as shown in Section SECREF7 . Two details of our version of the algorithm and implementation are helpful for improving convergence/performance on some data sets. One is that in the adjust computation (Figure SECREF8 ) the word vectors belonging to the minibatch are not updated until the end of the call so that references to word vectors throughout the call are to their values at the start of the call. The second is an option for interleaved minibatch formation, which can be used to ensure that indices INLINEFORM0 of input words belonging to a minibatch are sufficiently separated in the training corpus, and ideally, belong to different sentences. This allows input word vectors within a sentence (which are linked through their overlapping output word windows) to “learn” from each other during a single training iteration, as their respective minibatches are processed. Network bandwidth analysis Using the same notation as in ( EQREF15 ), and letting INLINEFORM0 denote the number of shards, the average bytes transferred from all PS shards for each dotprod call is upper bounded by DISPLAYFORM0 That is, each shard transfers the partial dot product results between the input vector of each minibatch word and all context words (there are no more than an average of INLINEFORM0 of these per minibatch word) and negative examples (there are no more than INLINEFORM1 per context per minibatch word, or INLINEFORM2 per minibatch word). It is not hard to see that this is precisely the number of bytes transferred to all PS shards for the vector linear combination component of each adjust call. That is, there are two linear vector updates for each pair of vectors for which a dot product was computed, and these updates involve the same linear combination weight. Normalizing ( EQREF31 ) by the minibatch size, we have the following counterpart of ( EQREF15 ) for the bytes transferred, in each direction, per trained minibatch word, for the proposed scheme: DISPLAYFORM0 Notice that the vector dimension INLINEFORM0 has been replaced by the number of shards INLINEFORM1 . 
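As a back-of-the-envelope reading of the two per-word expressions (an approximation, not a formula reproduced from the text): since the dimension d is replaced by the shard count S, the traffic ratio is on the order of S/d; for instance, for the 15-shard, 300-dimensional configuration used later for the sponsored search vectors,

\frac{S}{d} \;=\; \frac{15}{300} \;=\; 0.05,

i.e. roughly a twentyfold reduction in traffic per trained word.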
The ratio of the network bandwidths of the proposed system and a conventional parameter server based system is INLINEFORM0 For typical parameters of interest (we typically have INLINEFORM0 between 10 and 20, increasing with INLINEFORM1 between 300 and 1000), this is in the range of INLINEFORM2 to INLINEFORM3 , effectively eliminating network bandwidth as a bottleneck for training latency, relative to the conventional approach. Implementation on Hadoop We have implemented the system described in Section SECREF5 in Java and Scala on a Hadoop YARN scheduled cluster, leveraging Slider BIBREF10 and Spark BIBREF11 . Our end-to-end implementation of training carries out four steps: vocabulary generation, data set preprocessing, training, and vector export. We next review the details of each of these steps. Throughout, all data, including the initial training data, its preprocessed version, the exported vectors are all stored in the Hadoop Distributed File System (HDFS). We remark that although our compute environment is currently based on Hadoop and Spark, other distributed computational frameworks such as the recently released TensorFlow could also serve as a platform for implementing the proposed system. Main steps This step entails counting occurrences of all words in the training corpus and sorting them in order of decreasing occurrence. As mentioned, the vocabulary is taken to be the INLINEFORM0 most frequently occurring words, that occur at least some number INLINEFORM1 times. It is implemented in Spark as a straight-forward map-reduce job. In this step, each word in the training corpus is replaced by its index in the sorted vocabulary generated in the preceding phase (the ordering INLINEFORM0 referred to in Section SECREF5 ). This is also implemented in Spark using a low overhead in-memory key-value store to store the mapping from vocabulary words to their indices. Our implementation hashes words to 64 bit keys to simplify the key-value store. Referring to the system description in Section SECREF5 (and Figure FIGREF18 ), the parameter server portion is implemented in Java, with the RPC layer based on the Netty client-server library BIBREF30 . The RPC layer of the client is implemented similarly. The higher layers of the client (i/o, minibatch formation, partial dot product aggregation, linear combination weight computation) are implemented in Scala and Spark. In particular, the clients are created and connect to the PS shards from within an RDD mapPartitions method applied to the preprocessed data set that is converted to an RDD via the standard Spark file-to-RDD api. At the start of training, the PS shards are launched from a gateway node onto Hadoop cluster nodes using the Apache Slider application that has been designed to launch arbitrary applications onto a Hadoop YARN scheduled cluster. The IP addresses and ports of the respective PS shards are extracted and passed to the Spark executors (which in turn use them to connect respective clients to the PS shards) as a file via the standard spark-submit command line executed on a gateway node. Each mapPartitions operation in the clients is multi-threaded with a configurable number of threads handling the processing of the input data and the interaction with the PS shards. These threads share the same connections with the PS shards. The PS shards are also multi-threaded based on Netty, wherein a configurable number of worker threads process incoming dotprod and adjust requests from multiple connections in parallel. 
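Returning to the first of the four implementation steps, the vocabulary-generation job described above (a straightforward Spark map-reduce) might look as follows; this is an illustrative PySpark sketch with placeholder paths, whereas the production job is written in Scala:

from pyspark import SparkContext

def build_vocabulary(sc, corpus_path, out_path, max_vocab, min_count):
    counts = (sc.textFile(corpus_path)
                .flatMap(lambda line: line.split())        # one generalized word per token
                .map(lambda w: (w, 1))
                .reduceByKey(lambda a, b: a + b)
                .filter(lambda wc: wc[1] >= min_count))     # occurrence threshold
    ranked = (counts.sortBy(lambda wc: -wc[1])              # decreasing frequency
                    .zipWithIndex()                         # rank becomes the word's index
                    .filter(lambda x: x[1] < max_vocab)
                    .map(lambda x: (x[0][0], x[1])))
    ranked.saveAsTextFile(out_path)                         # consumed by the preprocessing step

# hypothetical driver invocation
sc = SparkContext(appName="word2vec-vocab")
build_vocabulary(sc, "hdfs:///search_sessions/*", "hdfs:///word2vec/vocab",
                 max_vocab=200 * 10**6, min_count=5)

The preprocessing step then replaces every word in the corpus with its index from this mapping, as described above.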
Each shard has a connection to each Spark executor. The word vector portions are stored in each PS shard in arrays of primitive floats, and as mentioned, their indices in the arrays coincide with the indices of their corresponding words in the vocabulary. In the steady state, the PS allocates no new data structures to avoid garbage collection. Objects are created only during start-up, and possibly during the fairly infrequent connection setups, as managed by the Netty RPC layer. In this final step, carried out after training has completed, the partial vectors stored in each PS shard are aggregated and joined with their respective words in the vocabulary and stored together as a text file in HDFS. Again, we leverage Spark to carry out this operation in a distributed fashion, by creating an RDD from the vocabulary and using mapPartitions to launch clients that get the partial vectors from the PS shards for the respective partition of vocabulary words, combine the partial vectors and save corresponding word and vectors pairs to HDFS. Training step throughput To give an idea of the kind of training throughput we can achieve with this system, the following is one configuration we have used for training the sponsored search application on our Hadoop cluster: Algorithm parameters: 200 million word vocabulary, 5 negative examples, maximum of 10 window size Training system parameters: 200 Spark executors, 8 threads per spark executor, minibatch size of 200 yields the following training throughputs in minibatch input words per second (see Section SECREF3 for the definition of input word), for varying PS shards and vector dimensions: For this data set and algorithm parameters, each input word has associated with it an average of about 20 positive context words and negative examples, so that the system is effectively updating about 21 times the third column in the table number of vectors per second. For the first line of the table, for example, this is over 33 million 300 dimensional vector updates per second. The conventional parameter server approach would require a total bandwidth of about 300 Gbps (30 server shards would be needed for this) to and from the parameter server for similar training throughput. This is close to 10 percent of the fabric bandwidth in our production data center. The proposed system requires only about 15 Gbps, making it far more practical for deployment to production in a shared data center, especially in light of the training latency for which this bandwidth must be sustained, which is about two days for data sets of interest. Even more extreme is the last line of the table (the 1000 dim. case), for which the equivalent throughput conventional system would require 800 Gbps vs. 20 Gbps for the proposed system. One important property of the training system is that its throughput at any given time is limited by the throughput of the slowest PS shard at that time. With this in mind, we use the YARN scheduler resource reservation capability exported through Slider to minimize resource contention on all of the machines to which the PS shards are assigned, thereby achieving higher sustained throughput. Another important property of the training system is that increasing the number of shards beyond some point is not helpful since the vector portions handled by each shard become so small that the random access memory transaction bandwidth (number of random cache lines per second) becomes the bottle neck. 
This explains the limited throughput scaling with PS shards for the 300 dimensional case above. Further optimization of the vector-store of each PS shard with respect to caching and non-uniform memory access might be beneficial. We leave this for future investigation. Evaluation & Deployment In this section, we provide evidence that the vectors trained by the proposed distributed system are of high quality, even with fairly aggressive parallelism during training. We also show bucket test results on live web search traffic that compare query-ad matching performance of our large-vocabulary model to the one trained using single-machine implementation, which led to the decision to deploy the proposed system in production in late 2015. Benchmark data set To compare the proposed distributed system we trained vectors on a publicly available data set collected and processed by the script 'demo-train-big-model-v1-compute-only.sh' from the open-source package of BIBREF0 . This script collects a variety of publicly available text corpuses and processes them using the algorithm described in BIBREF0 to coalesce sufficiently co-occurring words into phrases. We then randomly shuffled the order of sentences (delimited by new line) in the data set, retaining order of words within each sentence. The resulting data set has about 8 billion words and yields a vocabulary of about 7 million words and phrases (based on a cut off of 5 occurrences in the data set). We evaluated accuracy on the phrase analogies in the 'question-phrases.txt' file and also evaluated Spearman's rank correlation with respect to the editorial evaluation of semantic relatedness of pairs of words in the well known wordsim-353 collection BIBREF31 . The results are shown in Table TABREF34 . The first column shows results for the single machine implementation of BIBREF0 , the second for a 'low parallelism' configuration of our system using 50 Spark executors, minibatch size of 1, and 1 thread per executor, and the third column for a 'high parallelism' configuration again with 50 executors, but with minibatch size increased to 50 and 8 threads per executor. The various systems were run using the skipgram variant with 500 dimensional vectors, maximum window size of 20 (10 in each direction), 5 negative examples, subsample ratio of 1e-6 (see BIBREF0 ), initial learning rate of 0.01875, and 3 iterations over the data set. It can be seen that the vectors trained by the 'high parallelism' configuration of the proposed system, which is the closest to the configurations required for acceptable training latency in the large-scale sponsored search application, suffers only a modest loss in quality as measured by these tests. Note that this data set is more challenging for our system than the sponsored search data set, as it is less sparse and there is on average more overlap between words in different minibatches. In fact, if we attempt to increase the parallelism to 200 executors as was used for the training of the vectors described in the next subsection, training fails to converge altogether. We are unsure why our system yields better results than the implementation of BIBREF0 on the wordsim test, yet worse scores on the analogies test. We also note that the analogies test scores reported here involve computing the closest vector for each analogy “question” over the entire vocabulary and not just over the 1M most frequent words, as in the script 'demo-train-big-model-v1-compute-only.sh' of BIBREF0 . 
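For reference, the analogy evaluation mentioned above amounts to the following sketch, where `vecs` is the row-normalized matrix of trained vectors, `vocab` maps words and phrases to row indices, and excluding the three question words from the search is the usual convention (an assumption here):

import numpy as np

def analogy_accuracy(vecs, vocab, questions):
    # questions: iterable of (a, b, c, expected), asking "a is to b as c is to ?";
    # the answer is the vocabulary word closest (by cosine) to vec(b) - vec(a) + vec(c),
    # searched over the entire vocabulary as in the text.
    correct = 0
    for a, b, c, expected in questions:
        target = vecs[vocab[b]] - vecs[vocab[a]] + vecs[vocab[c]]
        target /= np.linalg.norm(target)
        sims = vecs @ target
        for w in (a, b, c):
            sims[vocab[w]] = -np.inf          # exclude the question words themselves
        correct += int(np.argmax(sims) == vocab[expected])
    return correct / len(questions)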
Sponsored Search data set We conducted qualitative evaluation in the context of sponsored search application described in Section SECREF2 . Figure FIGREF47 shows the queries whose trained vectors were found to be most similar (out of 133M queries) to an example ad vector, along with the respective cosine similarities to the ad vector. The figure shows the ten most and least similar among the 800 most similar queries, where we note that the ten least similar queries can still be considered to be fairly semantically similar. This particular set of vectors was trained for a vocabulary of 200M generalized words using the 300 dimensional vector, 15 PS shard settings described in Section SECREF41 . We found the vector quality demonstrated in Figure FIGREF47 to be the norm based on inspections of similar matchings of query vectors to a number of ad vectors. We also compared the cosine similarities for pairs of vectors trained using the proposed distributed system and for corresponding vector pairs trained using the open–source implementation of BIBREF0 , again on a large search session data set. The former was trained using a vocabulary of 200 million generalized words while the latter was trained using about 90 million words which is the most that could fit onto a specialized large memory machine. For a set of 7,560 generalized word pairs with words common to the vocabularies trained by the respective systems we found very good agreement in cosine similarities between the corresponding vectors from the two systems, with over 50% of word pairs having cosine similarity differences less than 0.06, and 91% of word pairs having differences less than 0.1. Online A/B tests Following successful offline evaluation of the proposed distributed system, in the following set of experiments we conducted tests on live web search traffic. We ran two bucket tests, each on INLINEFORM0 of search traffic, where we compared query-ad matches produced by training query and ad vectors using search session data set spanning 9 months of search data. One model was trained using implementation from BIBREF0 and the other was trained using the proposed distributed system. Both buckets were compared against control bucket, which employed a collection of different broad match techniques used in production at the time of the test. Each of the online tests were run for 10 days, one after another, more than a month apart. The results of the tests were reported in terms of query coverage (portion of queries for which ads were shown), Auction Depth (number of ads per query that made it into an auction) click-through rate (CTR, or number of ad clicks divided by number of ad impressions), click yield (number of clicks), and revenue. Instead of the actual numbers we show relative improvement over control metrics. Both methods produced a separate query-ad match dictionary by finding INLINEFORM0 nearest ads in the embedding space for each search query from our vocabulary, and keeping only ads with cosine similarity above INLINEFORM1 . The threshold was chosen based on editorial results. To implement the bucket test the query-ad match dictionary is produced offline and cached in the ad server memory such that ads can be retrieved in real-time given an input query. Post retrieval, a click model is used to estimate the clickability of the ad for that query and the ad is sent into an auction, where it competes with ads retrieved by other broad match algorithms. It gets to be shown to the user in case it wins one of the ad slots on the page. 
The first A/B test was conducted to evaluate the value of query-ad dictionary produced by single-machine implementation. This implementation could scale up to a model with 50M query vectors. It was compared against control bucket that ran a production broad match module. Following positive A/B test metrics, with improvements in coverage and revenue, presented in the first row of Table TABREF48 , the dictionary was launched to production and incorporated into the existing broad match production model. The second A/B test was conducted to evaluate incremental improvement over the single machine solution, which was already launched in production. The model contained vectors for 133M queries. As it can be observed in the second row of Table TABREF48 , the distributed solution provided additional 2.44% query coverage and additional 9.39% revenue, without degrading user experience (CTR remained neutral). This strong monetization potential of our distributed system for training large vocabularies of query and ad vectors led to its deployment in our sponsored search platform. The model is being retrained on a weekly basis, automated via Apache Oozie BIBREF32 , and is currently serving more than INLINEFORM0 of all broad matches. Conclusion In this paper, we presented a novel scalable word2vec training system that, unlike available systems, can train semantically accurate vectors for hundreds of millions of vocabulary words with training latency and network bandwidth usage suitable for regular training on commodity clusters. We motivated the usefulness of large vocabulary word2vec training with a sponsored search application involving generalized “words” corresponding to queries, ads, and hyperlinks, for which the proposed system has been deployed to production. The results on both benchmark data sets and online A/B tests strongly indicate the benefits of the proposed approach. [ht] INLINEFORM0 .dotprod( INLINEFORM1 , INLINEFORM2 , long INLINEFORM3 ) INLINEFORM0 Random Number Generator initialized with INLINEFORM1 INLINEFORM2 ; INLINEFORM3 iterate over words in minibatch INLINEFORM4 INLINEFORM5 iterate over words in context INLINEFORM6 INLINEFORM7 INLINEFORM8 ; INLINEFORM9 generate INLINEFORM10 random negative examples for current output word INLINEFORM11 Array( INLINEFORM12 negative word indices INLINEFORM13 , generated using INLINEFORM14 ) compute partial dot products for positive and negative examples INLINEFORM15 INLINEFORM16 INLINEFORM17 send results back to client INLINEFORM18 Server side computation - dotprod. [ht] void INLINEFORM19 .adjust( INLINEFORM20 , INLINEFORM21 , INLINEFORM22 , INLINEFORM23 , INLINEFORM24 ) INLINEFORM0 Random Number Generator initialized with INLINEFORM1 INLINEFORM2 ; INLINEFORM3 ; INLINEFORM4 ; INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 ; INLINEFORM11 regenerate random negative examples INLINEFORM12 Array( INLINEFORM13 negative word indices INLINEFORM14 , generated using INLINEFORM15 ) compute partial gradient updates and store in scratch area INLINEFORM16 ; INLINEFORM17 INLINEFORM18 INLINEFORM19 ; INLINEFORM20 add partial gradient updates to partial vectors in store all INLINEFORM21 INLINEFORM22 ; INLINEFORM23 Server side computation - adjust. 
[ht] InputinputOutputoutput INLINEFORM0 : Vectors for vocabulary words INLINEFORM1 = # of parameter servers needed for INLINEFORM2 words Launch parameter servers INLINEFORM3 Initialize vectors in PS server iteration INLINEFORM4 INLINEFORM5 UnprocessedPartitions INLINEFORM6 INLINEFORM7 each executor, in parallel UnprocessedPartitions is non-empty INLINEFORM8 INLINEFORM9 next partition in UnprocessedPartitions Launch client INLINEFORM10 connected to INLINEFORM11 INLINEFORM12 INLINEFORM13 minibatches in INLINEFORM14 INLINEFORM15 = randomly select a seed INLINEFORM16 INLINEFORM17 Array of word indices in INLINEFORM18 INLINEFORM19 INLINEFORM20 Array of Arrays of context word indices of words in INLINEFORM21 client broadcasts word indices to shards which compute partial dot products in parallel, returning results to client INLINEFORM22 INLINEFORM23 , in parallel INLINEFORM24 = INLINEFORM25 .dotprod( INLINEFORM26 , INLINEFORM27 , INLINEFORM28 ) aggregate partial dot products and compute linear coefficients for gradient update INLINEFORM29 INLINEFORM30 ; INLINEFORM31 client broadcasts coefficients to shards which compute partial vector linear combinations INLINEFORM32 INLINEFORM33 , in parallel INLINEFORM34 .adjust( INLINEFORM35 , INLINEFORM36 , INLINEFORM37 , INLINEFORM38 , INLINEFORM39 ) input vectors INLINEFORM40 } from INLINEFORM41 Grid based word2vec algorithm.
No
6b2fbc1c083491a774233f9edf8f76bd879418df
6b2fbc1c083491a774233f9edf8f76bd879418df_0
Q: How many nodes does the cluster have? Text: Introduction Embedding words in a common vector space can enable machine learning algorithms to achieve better performance in natural language processing (NLP) tasks. Word2vec BIBREF0 is a recently proposed family of algorithms for training such vector representations from unstructured text data via shallow neural networks. The geometry of the resulting vectors was shown in BIBREF0 to capture word semantic similarity through the cosine similarity of the corresponding vectors as well as more complex semantic relationships through vector differences, such as vec(“Madrid”) - vec(“Spain”) + vec(“France”) INLINEFORM0 vec(“Paris”). More recently, novel applications of word2vec involving unconventional generalized “words” and training corpuses have been proposed. These powerful ideas from the NLP community have been adapted by researchers from other domains to tasks beyond representation of words, including relational entities BIBREF1 , BIBREF2 , general text-based attributes BIBREF3 , descriptive text of images BIBREF4 , nodes in graph structure of networks BIBREF5 , and queries BIBREF6 , to name a few. While most NLP applications of word2vec do not require training of large vocabularies, many of the above mentioned real-world applications do. For example, the number of unique nodes in a social network BIBREF5 or the number of unique queries in a search engine BIBREF6 can easily reach few hundred million, a scale that is not achievable using existing word2vec implementations. The training of vectors for such large vocabularies presents several challenges. In word2vec, each vocabulary word has two associated INLINEFORM0 -dimensional vectors which must be trained, respectively referred to as input and output vectors, each of which is represented as an array of INLINEFORM1 single precision floating point numbers BIBREF0 . To achieve acceptable training latency, all vectors need to be kept in physical memory during training, and, as a result, word2vec requires INLINEFORM2 bytes of RAM to train a vocabulary INLINEFORM3 . For example, in Section SECREF2 , we discuss the search advertisement use case with 200 million generalized words and INLINEFORM4 which would thus require INLINEFORM5 = 480GB memory which is well beyond the capacity of typical commodity servers today. Another issue with large vocabulary word2vec training is that the training corpuses required for learning meaningful vectors for such large vocabularies, are themselves very large, on the order of 30 to 90 billion generalized words in the mentioned search advertising application, for example, leading to potentially prohibitively long training times. This is problematic for the envisioned applications which require frequent retraining of vectors as additional data containing new “words” becomes available. The best known approach for refreshing vectors is to periodically retrain on a suitably large window comprised of the most recent available data. In particular, we found that tricks like freezing the vectors for previously trained words don't work as well. The training latency is thus directly linked to staleness of the vectors and should be kept as small as feasible without compromising quality. Our main contribution is a novel distributed word2vec training system for commodity shared compute clusters that addresses these challenges. 
The proposed system: As discussed in Section SECREF4 , to the best of our knowledge, this is the first word2vec training system that is truly scalable in both of these aspects. We have implemented the proposed word2vec training system in Java and Scala, leveraging the open source building blocks Apache Slider BIBREF10 and Apache Spark BIBREF11 running on a Hadoop YARN-scheduled cluster BIBREF12 , BIBREF13 . Our word2vec solution enables the aforementioned applications to efficiently train vectors for unprecedented vocabulary sizes. Since late 2015, it has been incorporated into the Yahoo Gemini Ad Platform (https://gemini.yahoo.com) as a part of the “broad” ad matching pipeline, with regular retraining of vectors based on fresh user search session data. Sponsored search use case Sponsored search is a popular advertising model BIBREF14 used by web search engines, such as Google, Microsoft, and Yahoo, in which advertisers sponsor the top web search results in order to redirect user's attention from organic search results to ads that are highly relevant to the entered query. Most search engines provide a self-service tool in which the advertisers can create their own ads by providing ad creative to be shown to the users, along with a list of bid terms (i.e., queries for which advertisers wish to show their ad). Due to a large number of unique queries it is challenging for advertisers to identify all queries relevant to their product or service. For this reason search engines often provide a service of “broad” matching, which automatically finds additional relevant queries for advertisers to bid on. This is typically implemented by placing queries and ads in a common feature space, such as bag-of-words using tf-idf weighting, and calculating similarity between ads and queries using a feature space metric in order to find good broad match candidates. In an unconventional application of word2vec to historical search logs, one could train query and ad vectors that capture semantic relationships and find relevant broad match candidates in the resulting feature space. The idea of using word2vec to train query representations is not new and has been suggested by several researchers in the past BIBREF15 , BIBREF6 . However, until now, it was not possible to use the algorithm to its fullest extent due to computational limitations of existing word2vec implementations. The sponsored search training corpus consists of billions of user search sessions each comprising generalized “words” corresponding to entire user queries (not the individual words in the queries), clicked hyperlinks, and clicked advertisements, ordered according to the temporal ordering of the corresponding user actions. Figure FIGREF1 shows a snippet from such a training corpus wherein the clicked ads and search link clicks are encoded as string IDs prefixed by “adid_” and “slc_”, respectively. The queries are highlighted in bold. The goal is to train vector representations for queries, hyperlinks, and advertisements, and to use the semantic similarity captured by these vectors to target advertisements to semantically relevant queries that might otherwise not be found to be relevant using more conventional measures, such as prior clicks or the number of constituent words common to the query and advertisement meta data (i.e., title, description, bid keywords). 
Note that although the search result hyperlinks clicked by the user are not needed for the sponsored search system, they are nevertheless important to include during training as they help propagate relevance between the queries and ads of interest. Given trained query and ad vectors, finding relevant queries for a given ad amounts to calculating cosine similarity between the ad vector and all query vectors. The INLINEFORM0 queries with the highest similarity are retrieved as broad matches. As illustrated in Figure FIGREF5 for representative search session data, the fraction of query occurrences in the search sessions for which vectors are available, and hence for which potential ads can be found using this vector-based approach, increases at a steady pace with the number of queries in the vocabulary, even with as many as 120 million queries, each occurring at least 5 times. This observation suggests that this application can benefit greatly from vocabularies of 200 million or more generalized words. Moreover, we found that there are around 800 million generalized words that occur 5 or more times in our largest data sets, indicating that additional scaling far beyond 200 million is well worth pursuing. The results of BIBREF6 were based on training the largest vocabulary that could fit into the large memory of a special purpose server, which resulted in learned vector representations for about 45 million words. The proposed training system herein enables increasing this by several fold, resulting in far greater coverage of queries and a potentially significant boost in query monetization, as indicated by Figure FIGREF5 . The word2vec training problem In this paper we focus on the skipgram approach with random negative examples proposed in BIBREF0 . This has been found to yield the best results among the proposed variants on a variety of semantic tests of the resulting vectors BIBREF7 , BIBREF0 . Given a corpus consisting of a sequence of sentences INLINEFORM0 each comprising a sequence of words INLINEFORM1 , the objective is to maximize the log likelihood: DISPLAYFORM0 over input and output word row vectors INLINEFORM0 and INLINEFORM1 with INLINEFORM2 ranging over the words in the vocabulary INLINEFORM3 , where: We follow BIBREF0 for setting INLINEFORM0 and select words occurring in the corpus a sufficient number of times (e.g., at least 5 times), or, if this results in too many words, as the most frequently occurring INLINEFORM1 words, where INLINEFORM2 is the largest number words that can be handled by available computational resources. We further also assume a randomized version of ( EQREF6 ) according to the subsampling technique of BIBREF0 , which removes some occurrences of frequent words. The algorithm for maximizing ( EQREF6 ) advocated in BIBREF0 , and implemented in its open–source counterpart, is a minibatch stochastic gradient descent (SGD). Our training system is also based on minibatch SGD optimization of ( EQREF6 ), however, as described in Section SECREF5 , it is carried out in a distributed fashion in a manner quite different from the implementation of BIBREF0 . Any form of minibatch SGD optimization of ( EQREF6 ) involves the computation of dot products and linear combinations between input and output word vectors for all pairs of words occurring within the same window (with indices in INLINEFORM0 ). 
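Since the displayed objective above did not survive extraction, the standard skipgram-with-negative-sampling form from BIBREF0 is reproduced below for reference, summing over sentences s in the corpus and word positions t within each sentence; the notation (u for input vectors, v for output vectors, b for the window size, N negatives drawn from a noise distribution P_n over the vocabulary) is the usual one and may differ superficially from the lost display.

```latex
\mathcal{L} \;=\; \sum_{s \in \mathcal{S}} \; \sum_{t :\, w_t \in s} \;
  \sum_{\substack{j :\, w_j \in s \\ 0 < |j - t| \le b}}
  \Bigl[\, \log \sigma\!\bigl(u_{w_t} v_{w_j}^{\top}\bigr)
    \;+\; \sum_{i=1}^{N} \mathbb{E}_{\tilde{w}_i \sim P_n}
      \log \sigma\!\bigl(-\, u_{w_t} v_{\tilde{w}_i}^{\top}\bigr) \Bigr]
```

Every term reduces to dot products between one input vector and one output vector, followed in the gradient by linear combinations of those vectors, which is precisely the per-window computation referred to in the previous sentence.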
This is a massive computational task when carried out for multiple iterations over data sets with tens of billions of words, as encountered in applications described in the previous section. Single machine Several existing word2vec training systems are limited to running on a single machine, though with multiple parallel threads of execution operating on different segments of training data. These include the original open source implementation of word2vec BIBREF0 , as well as those of Medallia BIBREF16 , and Rehurek BIBREF17 . As mentioned in the introduction, these systems would require far larger memory configurations than available on typical commodity-scale servers. Distributed data-parallel A similar drawback applies to distributed data-parallel training systems like those available in Apache Spark MLLib BIBREF18 and Deeplearning4j BIBREF19 . In the former, in each iteration the Spark driver sends the latest vectors to all Spark executors. Each executor modifies its local copy of vectors based on its partition of the training data set, and the driver then combines local vector modifications to update the global vectors. It requires all vectors to be stored in the memory of all Spark executors, and, similarly to its single machine counterparts, is thus unsuitable for large vocabularies. The Deeplearning4j system takes a similar approach and thus suffers from the same limitations, although it does enable the use of GPUs to accelerate the training on each machine. Parameter servers A well-known distributed architecture for training very large machine learning models centers around the use of a parameter server to store the latest values of model parameters through the course of training. A parameter server is a high performance, distributed, in-memory key-value store specialized to the machine learning training application. It typically needs to support only fixed-size values corresponding to the model parameters, and also may support additive updates of values in addition to the usual key-value gets and puts. A parameter server-based training system also includes a number of worker/learner/client nodes that actually carry out the bulk of the training computations. The client nodes read in and parse training data in chunks or minibatches, fetch the model parameters that can be updated based on each minibatch, compute the updates (e.g., via gradient descent with respect to a minibatch restriction of the objective), and transmit the changes in parameter values to the parameter server shards which either overwrite or incrementally update these values in their respective in-memory stores. As observed and partially theoretically justified in BIBREF20 (see also BIBREF21 ), in many applications involving sparse training data characterized by low average overlap between the model parameters associated with different minibatches, the model parameter updates arriving in parallel from multiple client nodes can be aggregated on the parameter server shards without locking, synchronization, or atomicity guarantees, and still result in a far better model accuracy versus training time latency trade-off than single threaded (i.e., sequential) training. 
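To make the get/compute/put flow of such a conventional parameter-server client concrete, here is a deliberately tiny single-process sketch in Python; the single in-memory dict standing in for the (normally sharded) server, the key names, and the deferred-update style are all illustrative choices, not a description of any of the cited systems.

```python
import numpy as np

# Toy stand-in for a parameter server: a key -> vector store supporting get
# and additive increment. Real systems shard these keys over many servers.
class ToyParamServer:
    def __init__(self, vocab_size, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.store = {}
        for w in range(vocab_size):
            self.store[("in", w)] = (rng.random(dim, dtype=np.float32) - 0.5) / dim
            self.store[("out", w)] = np.zeros(dim, dtype=np.float32)

    def get(self, keys):
        return {k: self.store[k].copy() for k in keys}

    def increment(self, deltas):          # additive updates, no locking
        for k, d in deltas.items():
            self.store[k] += d

def client_step(ps, pairs, negatives, lr=0.025):
    """One minibatch on a client: fetch every needed vector over the 'network',
    compute skipgram-with-negative-sampling gradients locally, push them back.
    pairs = [(input_word, context_word)]; negatives[p] = negative ids for pair p."""
    keys = {("in", i) for i, _ in pairs} | {("out", o) for _, o in pairs}
    keys |= {("out", n) for negs in negatives.values() for n in negs}
    vecs = ps.get(keys)                                   # get: pull vectors
    deltas = {k: np.zeros_like(v) for k, v in vecs.items()}
    for p, (i, o) in enumerate(pairs):
        for word, label in [(o, 1.0)] + [(n, 0.0) for n in negatives[p]]:
            u, v = vecs[("in", i)], vecs[("out", word)]
            g = lr * (label - 1.0 / (1.0 + np.exp(-float(u @ v))))
            deltas[("in", i)] += g * v
            deltas[("out", word)] += g * u
    ps.increment(deltas)                                  # put: push gradients

ps = ToyParamServer(vocab_size=1000, dim=8)
client_step(ps, pairs=[(3, 17), (3, 42)], negatives={0: [5, 99], 1: [7, 250]})
```

The point to notice is that every needed input and output vector crosses the "network" twice per minibatch, once in each direction, which is the bandwidth cost quantified next.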
The parameter server paradigm has been applied successfully to the training of very large models for logistic regression, deep learning, and factorization machines, and to sampling from the posterior topic distribution in large-scale Latent Dirichlet Allocation BIBREF22, BIBREF23, BIBREF21, BIBREF24, BIBREF25, BIBREF26, BIBREF9, BIBREF27, BIBREF28. There have also been some attempts to extend the parameter-server approach to word2vec (e.g., BIBREF29). These have followed the above computational flow, with each parameter server shard storing the input and output vectors for a subset of the vocabulary. Multiple client nodes process minibatches of the training corpus, determining for each word in each minibatch the associated context words and random negative examples, issuing get requests to the parameter server shards for the corresponding vectors, computing the gradients with respect to each vector component, and issuing put or increment requests to update the corresponding vectors in the parameter server shards. Unfortunately, such a conventional parameter server-based word2vec training system requires too much network bandwidth to achieve acceptable training throughput. Using the skipgram training algorithm and denoting the algorithm parameters as INLINEFORM0 for the vector dimension, INLINEFORM1 for the number of words per minibatch, INLINEFORM2 for the average context size, and INLINEFORM3 for the number of random negative examples per context word, assuming negligible repetition of words within the minibatch and among the negative examples, and further assuming that vectors and their gradients are communicated and stored as arrays of single-precision floating point numbers at 4 bytes each, the amount of word vector data transferred for each get and put call from and to the parameter server, respectively, is on average INLINEFORM4, or about DISPLAYFORM0 bytes per trained minibatch word. The formula arises from the fact that the input and output vectors for each term in the minibatch must be sent (this is the '2' in the first factor in ( EQREF15 )), as must the output vectors for each random negative example, of which there are on average INLINEFORM0 per minibatch word. For INLINEFORM0, values within the ranges recommended in BIBREF0, this works out to INLINEFORM1 bytes transferred per word with each get and put. For 10 iterations of training on a data set of roughly 50 billion words, which is in the middle of the relevant range for the sponsored search application described in Section SECREF2, attaining a total training latency of one week using the above system would require an aggregate bandwidth of at least 1300 Gbits/sec to and from the parameter servers. This is impractically large for a single application on a commodity-hardware shared compute cluster. Moreover, a one-week training latency is already at the boundary of usefulness for our applications. In the next section, we present a different distributed system architecture for word2vec that requires significantly less network bandwidth for a given training throughput than the above conventional parameter server-based system, while continuing to accommodate large vocabularies and providing sufficient computational power to achieve the higher throughput allowed by the reduction in network bandwidth. Architecture Our distributed word2vec training system (i.e., for maximizing ( EQREF6 )) is illustrated in Figure FIGREF18, with pseudocode for the overall computational flow in Figures SECREF8, SECREF8, and SECREF8 in the Appendix.
As can be seen in Figure FIGREF18 , the proposed system also features parameter-server-like components (denoted by “PS shards” in the figure), however they are utilized very differently and have very different capabilities from their counterparts in the conventional approach described above. We shall, however, continue to refer to these components as parameter server shards. The system features the following innovations, explained in more detail below, with respect to the conventional approach. Column-wise partitioning of word vectors among parameter server (PS) shards (as opposed to word-wise partitioning). No transmission of word vectors or vector gradients across the network. Server-side computation of vector dot products and vector linear combinations, distributed by column partitions. Distributed server-side generation of random negative examples via broadcasting of common random number generator seeds. In particular, avoiding the transmission of vectors and gradients greatly reduces network bandwidth requirements relative to the conventional approach. We are not aware of any existing systems for training word2vec or its close relatives, matrix factorization and collaborative filtering (i.e., those systems cited in the previous section), that distribute vectors and compute in the manner of the proposed system. In our system, a number of parameter server shards each stores a designated portion of every input (row) vector INLINEFORM0 INLINEFORM1 INLINEFORM2 and output (row) vector INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 (dependence of components on INLINEFORM7 is suppressed). For example, assuming a vector dimension INLINEFORM8 , 10 parameter server shards, and equi-partitioned vectors, shard INLINEFORM9 would store the 30 components of INLINEFORM10 and INLINEFORM11 with indices INLINEFORM12 in the range INLINEFORM13 . We shall denote shard INLINEFORM14 stored portion of INLINEFORM15 and INLINEFORM16 as INLINEFORM17 and INLINEFORM18 , respectively. We refer to this as a 'column-wise' partitioning of the vectors, or more specifically, of the matrix whose rows correspond to the word vectors, as in INLINEFORM19 where INLINEFORM0 are the words in the vocabulary according to a fixed ordering INLINEFORM1 (e.g., by decreasing frequency of occurrence in the corpus). In the sequel, we shall equate each word INLINEFORM2 with INLINEFORM3 , its index in this ordering, so that INLINEFORM4 , and so on. For INLINEFORM5 shards, the vocabulary size can thus be scaled up by as much as a factor of INLINEFORM6 relative to a single machine. The vectors are initialized in the parameter server shards as in BIBREF0 . Multiple clients running on cluster nodes then read in different portions of the corpus and interact with the parameter server shards to carry out minibatch stochastic gradient descent (SGD) optimization of ( EQREF6 ) over the word vectors, following the algorithm in Figure SECREF8 (in the appendix). Specifically, the corpus is partitioned into disjoint minibatches with index sets INLINEFORM0 wherein each INLINEFORM1 is a subset of (sentence index, word index) pairs. For each INLINEFORM2 the word vectors are adjusted based on the gradient of the summation ( EQREF6 ) restricted to the input words belonging to INLINEFORM3 , as given by DISPLAYFORM0 The gradient of INLINEFORM0 with respect to the word vector components is 0 for all word vector components whose corresponding words do not appear as inputs, outputs, or negative examples in ( EQREF25 ). 
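Before continuing with the gradient expressions, a minimal sketch of the column-wise partitioning just described, using the illustrative d = 300, 10-shard figures from the example above:

```python
# Column-wise (equi-)partitioning: with d = 300 and 10 shards, shard s owns
# components [30*s, 30*s + 29] of *every* input and output vector, so adding
# shards lets the vocabulary grow well beyond what one machine can hold.
def column_range(shard: int, dim: int = 300, num_shards: int = 10):
    cols = dim // num_shards
    return shard * cols, (shard + 1) * cols - 1

print(column_range(3))   # -> (90, 119)
```

With this layout a shard can evaluate its share of any dot product or linear combination using only the components it owns, which is what the dotprod and adjust operations described next exploit.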
For the remaining components, the gradient is conveniently expressed in groups of components corresponding to specific word vectors. For example, consider a pair of indices INLINEFORM1 belonging to INLINEFORM2 . The gradient components corresponding to the word vector INLINEFORM3 can be expressed as DISPLAYFORM0 We see that evaluation of INLINEFORM0 requires computing the dot (or inner) products INLINEFORM1 appearing in the arguments to INLINEFORM2 and then computing linear combinations of the vectors INLINEFORM3 and INLINEFORM4 , with weights depending on the dot products. A similar expression and computation applies to the other gradient components corresponding to other word vectors appearing in INLINEFORM5 . The vector INLINEFORM6 (and, correspondingly, the other vectors as well) are updated according to the usual SGD update rule DISPLAYFORM0 where INLINEFORM0 is a (suitably small) learning rate. Once a client has assembled the indices (indexing according to the order INLINEFORM0 above) of positive output examples and input words corresponding to a minibatch INLINEFORM1 , it interacts with the parameter server shards to compute ( EQREF26 ) and ( EQREF27 ) using two remote procedure calls (RPCs), dotprod and adjust, which are broadcasted to all PS shards, along with an intervening computation to aggregate results from the dotprod RPC returned by each shard. The RPC calls are detailed in Figures SECREF8 and SECREF8 (in the Appendix), and, at a higher level, entail the following server/shard side operations: dotprod: Select negative examples INLINEFORM0 in ( EQREF26 ) according to a probability distribution derived from the vocabulary histogram proposed in BIBREF0 , but with the client thread supplied seed initializing the random number generation, and then return all partial dot products required to evaluate the gradient ( EQREF26 ) for all positive output, negative output, and input word vectors associated with the minibatch, wherein the partial dot products involve those vector components stored on the designated shard: INLINEFORM1 . adjust: Regenerate negative examples used in preceding dotprod call using the same seed that is again supplied by the client thread. Compute ( EQREF27 ) for vector components associated with the minibatch stored on the shard as a partial vector (restricted to components stored on shard) linear combination using weights received from the client. Between these two RPCs the client computes the linear combination weights needed for adjust by summing the partial inner products returned by the shards in response to the dotprod calls and evaluating the sigmoid function at values given by the aggregated dot products. These weights are then passed to the adjust RPC, along with the seeds for regenerating the identical random negative example indices INLINEFORM0 that were generated during the dotprod RPC. The retransmission simplifies the server in that state need not be maintained between corresponding dotprod and adjust calls. Note that the same seeds are sent to all shards in both calls so that each shard generates the same set of negative example indices. The shards are multithreaded and each thread handles the stream of RPC's coming from all client threads running on a single node. In a typical at scale run of the algorithm, the above process is carried out by multiple client threads running on each of a few hundred nodes, all interacting with the PS shards in parallel. 
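The following self-contained Python sketch mirrors this two-RPC protocol on toy data: in contrast to the conventional client sketched earlier, only word indices, scalar partial dot products, scalar weights, and a shared random seed cross the "network", never vectors or gradients. Class and method names, the flat positive-then-negative pair ordering, and the uniform (rather than vocabulary-histogram-based) negative sampling are simplifications for illustration, not the actual RPC API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ShardStub:
    """Toy PS shard holding one column slice of the input (U) and output (V)
    matrices, exposing dotprod/adjust in the spirit of the RPCs above."""
    def __init__(self, vocab, lo, hi, seed=0):
        rng = np.random.default_rng(seed)
        self.vocab, self.lo, self.hi = vocab, lo, hi
        self.U = 0.01 * (rng.random((vocab, hi - lo), dtype=np.float32) - 0.5)
        self.V = np.zeros((vocab, hi - lo), dtype=np.float32)

    def _pairs(self, inputs, outputs, num_neg, seed):
        # Negatives regenerated from the broadcast seed (uniform here; the
        # real system samples from a vocabulary histogram).
        negs = np.random.default_rng(seed).integers(0, self.vocab,
                                                    (len(inputs), num_neg))
        pos = [(i, o, 1.0) for i, o in zip(inputs, outputs)]
        neg = [(i, n, 0.0) for i, row in zip(inputs, negs) for n in row]
        return pos + neg

    def dotprod(self, inputs, outputs, num_neg, seed):
        # Partial dot products restricted to this shard's column slice.
        return np.array([self.U[i] @ self.V[o]
                         for i, o, _ in self._pairs(inputs, outputs, num_neg, seed)])

    def adjust(self, inputs, outputs, num_neg, seed, weights):
        # Regenerate the identical negatives from `seed`, then apply weighted
        # linear combinations; updates are deferred to the end of the call.
        dU, dV = np.zeros_like(self.U), np.zeros_like(self.V)
        for (i, o, _), w in zip(self._pairs(inputs, outputs, num_neg, seed), weights):
            dU[i] += w * self.V[o]
            dV[o] += w * self.U[i]
        self.U += dU
        self.V += dV

def client_minibatch(shards, inputs, outputs, num_neg=5, lr=0.025):
    seed = 12345                      # same seed broadcast to every shard
    dots = sum(s.dotprod(inputs, outputs, num_neg, seed) for s in shards)
    labels = np.concatenate([np.ones(len(inputs)), np.zeros(len(inputs) * num_neg)])
    weights = lr * (labels - sigmoid(dots))
    for s in shards:
        s.adjust(inputs, outputs, num_neg, seed, weights)

# Two shards splitting a 10-dimensional toy model over a 100-word vocabulary.
shards = [ShardStub(100, 0, 5), ShardStub(100, 5, 10)]
client_minibatch(shards, inputs=[3, 7], outputs=[11, 42])
```

Note how adjust regenerates the negatives from the broadcast seed rather than receiving their indices, and how each shard defers its vector updates until the end of the call, matching the description above and below.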
The data set is iterated over multiple times and after each iteration, the learning rate INLINEFORM0 is reduced in a manner similar to the open source implementation of BIBREF0 . Note that there is no locking or synchronization of the word vector state within or across shards or across client threads during any part of the computation. The only synchronization in effect is that the RPC broadcast ensures that all shards operate on the same set of word vector indices for computing their portion of the corresponding calls. Additionally, the client threads independently wait for all responses to their corresponding dotprod calls before proceeding. The lack of synchronization introduces many approximations into the overall SGD computation, similar in spirit to the HOGWILD BIBREF20 and Downpour SGD BIBREF21 distributed optimization schemes. For example, here, in the worst case, the state of the vectors associated with a minibatch could change between the dotprod and adjust calls issued by a single client thread. Nevertheless, despite such approximations, our distributed algorithm incurs surprisingly little degradation in the quality of the trained vectors as compared to single machine solutions (in cases where the computation can be carried out on one machine), as shown in Section SECREF7 . Two details of our version of the algorithm and implementation are helpful for improving convergence/performance on some data sets. One is that in the adjust computation (Figure SECREF8 ) the word vectors belonging to the minibatch are not updated until the end of the call so that references to word vectors throughout the call are to their values at the start of the call. The second is an option for interleaved minibatch formation, which can be used to ensure that indices INLINEFORM0 of input words belonging to a minibatch are sufficiently separated in the training corpus, and ideally, belong to different sentences. This allows input word vectors within a sentence (which are linked through their overlapping output word windows) to “learn” from each other during a single training iteration, as their respective minibatches are processed. Network bandwidth analysis Using the same notation as in ( EQREF15 ), and letting INLINEFORM0 denote the number of shards, the average bytes transferred from all PS shards for each dotprod call is upper bounded by DISPLAYFORM0 That is, each shard transfers the partial dot product results between the input vector of each minibatch word and all context words (there are no more than an average of INLINEFORM0 of these per minibatch word) and negative examples (there are no more than INLINEFORM1 per context per minibatch word, or INLINEFORM2 per minibatch word). It is not hard to see that this is precisely the number of bytes transferred to all PS shards for the vector linear combination component of each adjust call. That is, there are two linear vector updates for each pair of vectors for which a dot product was computed, and these updates involve the same linear combination weight. Normalizing ( EQREF31 ) by the minibatch size, we have the following counterpart of ( EQREF15 ) for the bytes transferred, in each direction, per trained minibatch word, for the proposed scheme: DISPLAYFORM0 Notice that the vector dimension INLINEFORM0 has been replaced by the number of shards INLINEFORM1 . 
The ratio of the network bandwidths of the proposed system and a conventional parameter server based system is INLINEFORM0 For typical parameters of interest (we typically have INLINEFORM0 between 10 and 20, increasing with INLINEFORM1 between 300 and 1000), this is in the range of INLINEFORM2 to INLINEFORM3 , effectively eliminating network bandwidth as a bottleneck for training latency, relative to the conventional approach. Implementation on Hadoop We have implemented the system described in Section SECREF5 in Java and Scala on a Hadoop YARN scheduled cluster, leveraging Slider BIBREF10 and Spark BIBREF11 . Our end-to-end implementation of training carries out four steps: vocabulary generation, data set preprocessing, training, and vector export. We next review the details of each of these steps. Throughout, all data, including the initial training data, its preprocessed version, the exported vectors are all stored in the Hadoop Distributed File System (HDFS). We remark that although our compute environment is currently based on Hadoop and Spark, other distributed computational frameworks such as the recently released TensorFlow could also serve as a platform for implementing the proposed system. Main steps This step entails counting occurrences of all words in the training corpus and sorting them in order of decreasing occurrence. As mentioned, the vocabulary is taken to be the INLINEFORM0 most frequently occurring words, that occur at least some number INLINEFORM1 times. It is implemented in Spark as a straight-forward map-reduce job. In this step, each word in the training corpus is replaced by its index in the sorted vocabulary generated in the preceding phase (the ordering INLINEFORM0 referred to in Section SECREF5 ). This is also implemented in Spark using a low overhead in-memory key-value store to store the mapping from vocabulary words to their indices. Our implementation hashes words to 64 bit keys to simplify the key-value store. Referring to the system description in Section SECREF5 (and Figure FIGREF18 ), the parameter server portion is implemented in Java, with the RPC layer based on the Netty client-server library BIBREF30 . The RPC layer of the client is implemented similarly. The higher layers of the client (i/o, minibatch formation, partial dot product aggregation, linear combination weight computation) are implemented in Scala and Spark. In particular, the clients are created and connect to the PS shards from within an RDD mapPartitions method applied to the preprocessed data set that is converted to an RDD via the standard Spark file-to-RDD api. At the start of training, the PS shards are launched from a gateway node onto Hadoop cluster nodes using the Apache Slider application that has been designed to launch arbitrary applications onto a Hadoop YARN scheduled cluster. The IP addresses and ports of the respective PS shards are extracted and passed to the Spark executors (which in turn use them to connect respective clients to the PS shards) as a file via the standard spark-submit command line executed on a gateway node. Each mapPartitions operation in the clients is multi-threaded with a configurable number of threads handling the processing of the input data and the interaction with the PS shards. These threads share the same connections with the PS shards. The PS shards are also multi-threaded based on Netty, wherein a configurable number of worker threads process incoming dotprod and adjust requests from multiple connections in parallel. 
Each shard has a connection to each Spark executor. The word vector portions are stored in each PS shard in arrays of primitive floats, and as mentioned, their indices in the arrays coincide with the indices of their corresponding words in the vocabulary. In the steady state, the PS allocates no new data structures, to avoid garbage collection. Objects are created only during start-up, and possibly during the fairly infrequent connection setups, as managed by the Netty RPC layer. In the final vector-export step, carried out after training has completed, the partial vectors stored in each PS shard are aggregated and joined with their respective words in the vocabulary and stored together as a text file in HDFS. Again, we leverage Spark to carry out this operation in a distributed fashion, by creating an RDD from the vocabulary and using mapPartitions to launch clients that get the partial vectors from the PS shards for the respective partition of vocabulary words, combine the partial vectors, and save the corresponding word and vector pairs to HDFS. Training step throughput To give an idea of the kind of training throughput we can achieve with this system, the following is one configuration we have used for training the sponsored search application on our Hadoop cluster: Algorithm parameters: 200 million word vocabulary, 5 negative examples, maximum window size of 10. Training system parameters: 200 Spark executors, 8 threads per Spark executor, minibatch size of 200. This yields the following training throughputs in minibatch input words per second (see Section SECREF3 for the definition of input word), for varying PS shards and vector dimensions: For this data set and algorithm parameters, each input word has associated with it an average of about 20 positive context words and negative examples, so that the system is effectively updating vectors at about 21 times the rate given in the third column of the table (in vectors per second). For the first line of the table, for example, this is over 33 million 300-dimensional vector updates per second. The conventional parameter server approach would require a total bandwidth of about 300 Gbps (30 server shards would be needed for this) to and from the parameter server for similar training throughput. This is close to 10 percent of the fabric bandwidth in our production data center. The proposed system requires only about 15 Gbps, making it far more practical for deployment to production in a shared data center, especially in light of the training latency for which this bandwidth must be sustained, which is about two days for data sets of interest. Even more extreme is the last line of the table (the 1000-dimensional case), for which the equivalent-throughput conventional system would require 800 Gbps vs. 20 Gbps for the proposed system. One important property of the training system is that its throughput at any given time is limited by the throughput of the slowest PS shard at that time. With this in mind, we use the YARN scheduler resource reservation capability exported through Slider to minimize resource contention on all of the machines to which the PS shards are assigned, thereby achieving higher sustained throughput. Another important property of the training system is that increasing the number of shards beyond some point is not helpful, since the vector portions handled by each shard become so small that the random access memory transaction bandwidth (the number of random cache lines per second) becomes the bottleneck.
This explains the limited throughput scaling with PS shards for the 300 dimensional case above. Further optimization of the vector-store of each PS shard with respect to caching and non-uniform memory access might be beneficial. We leave this for future investigation. Evaluation & Deployment In this section, we provide evidence that the vectors trained by the proposed distributed system are of high quality, even with fairly aggressive parallelism during training. We also show bucket test results on live web search traffic that compare query-ad matching performance of our large-vocabulary model to the one trained using single-machine implementation, which led to the decision to deploy the proposed system in production in late 2015. Benchmark data set To compare the proposed distributed system we trained vectors on a publicly available data set collected and processed by the script 'demo-train-big-model-v1-compute-only.sh' from the open-source package of BIBREF0 . This script collects a variety of publicly available text corpuses and processes them using the algorithm described in BIBREF0 to coalesce sufficiently co-occurring words into phrases. We then randomly shuffled the order of sentences (delimited by new line) in the data set, retaining order of words within each sentence. The resulting data set has about 8 billion words and yields a vocabulary of about 7 million words and phrases (based on a cut off of 5 occurrences in the data set). We evaluated accuracy on the phrase analogies in the 'question-phrases.txt' file and also evaluated Spearman's rank correlation with respect to the editorial evaluation of semantic relatedness of pairs of words in the well known wordsim-353 collection BIBREF31 . The results are shown in Table TABREF34 . The first column shows results for the single machine implementation of BIBREF0 , the second for a 'low parallelism' configuration of our system using 50 Spark executors, minibatch size of 1, and 1 thread per executor, and the third column for a 'high parallelism' configuration again with 50 executors, but with minibatch size increased to 50 and 8 threads per executor. The various systems were run using the skipgram variant with 500 dimensional vectors, maximum window size of 20 (10 in each direction), 5 negative examples, subsample ratio of 1e-6 (see BIBREF0 ), initial learning rate of 0.01875, and 3 iterations over the data set. It can be seen that the vectors trained by the 'high parallelism' configuration of the proposed system, which is the closest to the configurations required for acceptable training latency in the large-scale sponsored search application, suffers only a modest loss in quality as measured by these tests. Note that this data set is more challenging for our system than the sponsored search data set, as it is less sparse and there is on average more overlap between words in different minibatches. In fact, if we attempt to increase the parallelism to 200 executors as was used for the training of the vectors described in the next subsection, training fails to converge altogether. We are unsure why our system yields better results than the implementation of BIBREF0 on the wordsim test, yet worse scores on the analogies test. We also note that the analogies test scores reported here involve computing the closest vector for each analogy “question” over the entire vocabulary and not just over the 1M most frequent words, as in the script 'demo-train-big-model-v1-compute-only.sh' of BIBREF0 . 
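For reference, the two evaluations reported in Table TABREF34 can be computed as sketched below for a dictionary of L2-normalized vectors; this is an illustrative reimplementation, not the actual evaluation scripts.

```python
import numpy as np
from scipy.stats import spearmanr

def analogy_accuracy(vecs, questions):
    """questions: list of (a, b, c, d) meaning a:b :: c:d. The answer is
    searched over the *entire* vocabulary, as noted above."""
    words = list(vecs)
    M = np.stack([vecs[w] for w in words])          # vocab x dim
    correct = 0
    for a, b, c, d in questions:
        q = vecs[b] - vecs[a] + vecs[c]
        q /= np.linalg.norm(q)
        sims = M @ q                                # cosine, since normalized
        for w in (a, b, c):                         # exclude the question words
            sims[words.index(w)] = -np.inf
        correct += words[int(np.argmax(sims))] == d
    return correct / len(questions)

def wordsim_spearman(vecs, pairs_with_scores):
    """pairs_with_scores: list of (w1, w2, human_score) as in wordsim-353."""
    model = [float(vecs[w1] @ vecs[w2]) for w1, w2, _ in pairs_with_scores]
    human = [s for _, _, s in pairs_with_scores]
    return spearmanr(model, human).correlation
```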
Sponsored Search data set We conducted qualitative evaluation in the context of sponsored search application described in Section SECREF2 . Figure FIGREF47 shows the queries whose trained vectors were found to be most similar (out of 133M queries) to an example ad vector, along with the respective cosine similarities to the ad vector. The figure shows the ten most and least similar among the 800 most similar queries, where we note that the ten least similar queries can still be considered to be fairly semantically similar. This particular set of vectors was trained for a vocabulary of 200M generalized words using the 300 dimensional vector, 15 PS shard settings described in Section SECREF41 . We found the vector quality demonstrated in Figure FIGREF47 to be the norm based on inspections of similar matchings of query vectors to a number of ad vectors. We also compared the cosine similarities for pairs of vectors trained using the proposed distributed system and for corresponding vector pairs trained using the open–source implementation of BIBREF0 , again on a large search session data set. The former was trained using a vocabulary of 200 million generalized words while the latter was trained using about 90 million words which is the most that could fit onto a specialized large memory machine. For a set of 7,560 generalized word pairs with words common to the vocabularies trained by the respective systems we found very good agreement in cosine similarities between the corresponding vectors from the two systems, with over 50% of word pairs having cosine similarity differences less than 0.06, and 91% of word pairs having differences less than 0.1. Online A/B tests Following successful offline evaluation of the proposed distributed system, in the following set of experiments we conducted tests on live web search traffic. We ran two bucket tests, each on INLINEFORM0 of search traffic, where we compared query-ad matches produced by training query and ad vectors using search session data set spanning 9 months of search data. One model was trained using implementation from BIBREF0 and the other was trained using the proposed distributed system. Both buckets were compared against control bucket, which employed a collection of different broad match techniques used in production at the time of the test. Each of the online tests were run for 10 days, one after another, more than a month apart. The results of the tests were reported in terms of query coverage (portion of queries for which ads were shown), Auction Depth (number of ads per query that made it into an auction) click-through rate (CTR, or number of ad clicks divided by number of ad impressions), click yield (number of clicks), and revenue. Instead of the actual numbers we show relative improvement over control metrics. Both methods produced a separate query-ad match dictionary by finding INLINEFORM0 nearest ads in the embedding space for each search query from our vocabulary, and keeping only ads with cosine similarity above INLINEFORM1 . The threshold was chosen based on editorial results. To implement the bucket test the query-ad match dictionary is produced offline and cached in the ad server memory such that ads can be retrieved in real-time given an input query. Post retrieval, a click model is used to estimate the clickability of the ad for that query and the ad is sent into an auction, where it competes with ads retrieved by other broad match algorithms. It gets to be shown to the user in case it wins one of the ad slots on the page. 
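A minimal sketch of how such an offline query-ad broad match dictionary can be assembled from trained vectors; K and the similarity threshold are placeholders (the production values were chosen editorially and are not given here), and at the vocabulary sizes above the dense similarity computation would of course be blocked or approximated rather than materialized in one matrix.

```python
import numpy as np

def build_broad_match_dict(query_vecs, ad_vecs, k=10, threshold=0.5):
    """For each query, keep the k nearest ads by cosine similarity, subject to
    a minimum-similarity threshold. Only for small illustrative inputs."""
    q_ids, Q = zip(*query_vecs.items())
    a_ids, A = zip(*ad_vecs.items())
    Q, A = np.stack(Q), np.stack(A)
    Q /= np.linalg.norm(Q, axis=1, keepdims=True)   # cosine via normalized dot
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    sims = Q @ A.T                                   # queries x ads
    match = {}
    for qi, q in enumerate(q_ids):
        top = np.argsort(-sims[qi])[:k]
        match[q] = [(a_ids[ai], float(sims[qi, ai]))
                    for ai in top if sims[qi, ai] >= threshold]
    return match

# Tiny random example just to exercise the function.
rng = np.random.default_rng(0)
queries = {f"q{i}": rng.normal(size=8) for i in range(5)}
ads = {f"ad{i}": rng.normal(size=8) for i in range(20)}
print(build_broad_match_dict(queries, ads, k=3, threshold=0.0)["q0"])
```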
The first A/B test was conducted to evaluate the value of query-ad dictionary produced by single-machine implementation. This implementation could scale up to a model with 50M query vectors. It was compared against control bucket that ran a production broad match module. Following positive A/B test metrics, with improvements in coverage and revenue, presented in the first row of Table TABREF48 , the dictionary was launched to production and incorporated into the existing broad match production model. The second A/B test was conducted to evaluate incremental improvement over the single machine solution, which was already launched in production. The model contained vectors for 133M queries. As it can be observed in the second row of Table TABREF48 , the distributed solution provided additional 2.44% query coverage and additional 9.39% revenue, without degrading user experience (CTR remained neutral). This strong monetization potential of our distributed system for training large vocabularies of query and ad vectors led to its deployment in our sponsored search platform. The model is being retrained on a weekly basis, automated via Apache Oozie BIBREF32 , and is currently serving more than INLINEFORM0 of all broad matches. Conclusion In this paper, we presented a novel scalable word2vec training system that, unlike available systems, can train semantically accurate vectors for hundreds of millions of vocabulary words with training latency and network bandwidth usage suitable for regular training on commodity clusters. We motivated the usefulness of large vocabulary word2vec training with a sponsored search application involving generalized “words” corresponding to queries, ads, and hyperlinks, for which the proposed system has been deployed to production. The results on both benchmark data sets and online A/B tests strongly indicate the benefits of the proposed approach. [ht] INLINEFORM0 .dotprod( INLINEFORM1 , INLINEFORM2 , long INLINEFORM3 ) INLINEFORM0 Random Number Generator initialized with INLINEFORM1 INLINEFORM2 ; INLINEFORM3 iterate over words in minibatch INLINEFORM4 INLINEFORM5 iterate over words in context INLINEFORM6 INLINEFORM7 INLINEFORM8 ; INLINEFORM9 generate INLINEFORM10 random negative examples for current output word INLINEFORM11 Array( INLINEFORM12 negative word indices INLINEFORM13 , generated using INLINEFORM14 ) compute partial dot products for positive and negative examples INLINEFORM15 INLINEFORM16 INLINEFORM17 send results back to client INLINEFORM18 Server side computation - dotprod. [ht] void INLINEFORM19 .adjust( INLINEFORM20 , INLINEFORM21 , INLINEFORM22 , INLINEFORM23 , INLINEFORM24 ) INLINEFORM0 Random Number Generator initialized with INLINEFORM1 INLINEFORM2 ; INLINEFORM3 ; INLINEFORM4 ; INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 ; INLINEFORM11 regenerate random negative examples INLINEFORM12 Array( INLINEFORM13 negative word indices INLINEFORM14 , generated using INLINEFORM15 ) compute partial gradient updates and store in scratch area INLINEFORM16 ; INLINEFORM17 INLINEFORM18 INLINEFORM19 ; INLINEFORM20 add partial gradient updates to partial vectors in store all INLINEFORM21 INLINEFORM22 ; INLINEFORM23 Server side computation - adjust. 
[ht] InputinputOutputoutput INLINEFORM0 : Vectors for vocabulary words INLINEFORM1 = # of parameter servers needed for INLINEFORM2 words Launch parameter servers INLINEFORM3 Initialize vectors in PS server iteration INLINEFORM4 INLINEFORM5 UnprocessedPartitions INLINEFORM6 INLINEFORM7 each executor, in parallel UnprocessedPartitions is non-empty INLINEFORM8 INLINEFORM9 next partition in UnprocessedPartitions Launch client INLINEFORM10 connected to INLINEFORM11 INLINEFORM12 INLINEFORM13 minibatches in INLINEFORM14 INLINEFORM15 = randomly select a seed INLINEFORM16 INLINEFORM17 Array of word indices in INLINEFORM18 INLINEFORM19 INLINEFORM20 Array of Arrays of context word indices of words in INLINEFORM21 client broadcasts word indices to shards which compute partial dot products in parallel, returning results to client INLINEFORM22 INLINEFORM23 , in parallel INLINEFORM24 = INLINEFORM25 .dotprod( INLINEFORM26 , INLINEFORM27 , INLINEFORM28 ) aggregate partial dot products and compute linear coefficients for gradient update INLINEFORM29 INLINEFORM30 ; INLINEFORM31 client broadcasts coefficients to shards which compute partial vector linear combinations INLINEFORM32 INLINEFORM33 , in parallel INLINEFORM34 .adjust( INLINEFORM35 , INLINEFORM36 , INLINEFORM37 , INLINEFORM38 , INLINEFORM39 ) input vectors INLINEFORM40 } from INLINEFORM41 Grid based word2vec algorithm.
Unanswerable
fb56743e942883d7e74a73c70bd11016acddc348
fb56743e942883d7e74a73c70bd11016acddc348_0
Q: What data do they train the language models on? Text: Introduction The sequence-to-sequence (seq2seq) model proposed in BIBREF0 , BIBREF1 , BIBREF2 is a neural architecture for performing sequence classification and later adopted to perform speech recognition in BIBREF3 , BIBREF4 , BIBREF5 . The model allows to integrate the main blocks of ASR such as acoustic model, alignment model and language model into a single framework. The recent ASR advancements in connectionist temporal classification (CTC) BIBREF5 , BIBREF4 and attention BIBREF3 , BIBREF6 based approaches has created larger interest in speech community to use seq2seq models. To leverage performance gains from this model as similar or better to conventional hybrid RNN/DNN-HMM models requires a huge amount of data BIBREF7 . Intuitively, this is due to the wide-range role of the model in performing alignment and language modeling along with acoustic to character label mapping at each iteration. In this paper, we explore the multilingual training approaches BIBREF8 , BIBREF9 , BIBREF10 used in hybrid DNN/RNN-HMMs to incorporate them into the seq2seq models. In a context of applications of multilingual approaches towards seq2seq model, CTC is mainly used instead of the attention models. A multilingual CTC is proposed in BIBREF11 , which uses a universal phoneset, FST decoder and language model. The authors also use linear hidden unit contribution (LHUC) BIBREF12 technique to rescale the hidden unit outputs for each language as a way to adapt to a particular language. Another work BIBREF13 on multilingual CTC shows the importance of language adaptive vectors as auxiliary input to the encoder in multilingual CTC model. The decoder used here is a simple INLINEFORM0 decoder. An extensive analysis on multilingual CTC mainly focusing on improving under limited data condition is performed in BIBREF14 . Here, the authors use a word level FST decoder integrated with CTC during decoding. On a similar front, attention models are explored within a multilingual setup in BIBREF15 , BIBREF16 based on attention-based seq2seq to build a model from multiple languages. The data is just combined together assuming the target languages are seen during the training. And, hence no special transfer learning techniques were used here to address the unseen languages during training. The main motivation and contribution behind this work is as follows: Sequence-to-Sequence Model In this work, we use the attention based approach BIBREF1 as it provides an effective methodology to perform sequence-to-sequence (seq2seq) training. Considering the limitations of attention in performing monotonic alignment BIBREF18 , BIBREF19 , we choose to use CTC loss function to aid the attention mechanism in both training and decoding. The basic network architecture is shown in Fig. FIGREF7 . Let INLINEFORM0 be a INLINEFORM1 -length speech feature sequence and INLINEFORM2 be a INLINEFORM3 -length grapheme sequence. A multi-objective learning framework INLINEFORM4 proposed in BIBREF17 is used in this work to unify attention loss INLINEFORM5 and CTC loss INLINEFORM6 with a linear interpolation weight INLINEFORM7 , as follows: DISPLAYFORM0 The unified model allows to obtain both monotonicity and effective sequence level training. 
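For reference, the multi-objective loss referred to above takes the standard hybrid CTC/attention form of BIBREF17, reproduced here because the displayed equation did not survive extraction; C denotes the grapheme sequence, X the input feature sequence, and λ ∈ [0, 1] the linear interpolation weight (written here on the CTC term; the roles simply swap under the opposite convention).

```latex
\mathcal{L}_{\mathrm{MOL}}
  \;=\; \lambda \,\log p_{\mathrm{ctc}}(C \mid X)
  \;+\; (1-\lambda)\,\log p_{\mathrm{att}}(C \mid X)
```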
INLINEFORM0 represents the posterior probability of character label sequence INLINEFORM1 w.r.t input sequence INLINEFORM2 based on the attention approach, which is decomposed with the probabilistic chain rule, as follows: DISPLAYFORM0 where INLINEFORM0 denotes the ground truth history. Detailed explanations about the attention mechanism is described later. Similarly, INLINEFORM0 represents the posterior probability based on the CTC approach. DISPLAYFORM0 where INLINEFORM0 is a CTC state sequence composed of the original grapheme set and the additional blank symbol. INLINEFORM1 is a set of all possible sequences given the character sequence INLINEFORM2 . The following paragraphs explain the encoder, attention decoder, CTC, and joint decoding used in our approach. Data details and experimental setup In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of conversational telephone speech (CTS) but some scripted recordings and far field recordings are presented as well. Table TABREF14 presents the details of the languages used in this work for training and evaluation. 80 dimensional Mel-filterbank (fbank) features are then extracted from the speech samples using a sliding window of size 25 ms with 10ms stride. KALDI toolkit BIBREF24 is used to perform the feature processing. The fbank features are then fed to a seq2seq model with the following configuration: The Bi-RNN BIBREF25 models mentioned above uses a LSTM BIBREF26 cell followed by a projection layer (BLSTMP). In our experiments below, we use only a character-level seq2seq model trained by CTC and attention decoder. Thus in the following experiments we intend to use character error rate (% CER) as a suitable measure to analyze the model performance. However, in section SECREF26 we integrate a character-level RNNLM BIBREF27 with seq2seq model externally and showcase the performance in terms of word error rate (% WER). In this case the words are obtained by concatenating the characters and the space together for scoring with reference words. All experiments are implemented in ESPnet, end-to-end speech processing toolkit BIBREF28 . Multilingual experiments Multilingual approaches used in hybrid RNN/DNN-HMM systems BIBREF10 have been used for for tackling the problem of low-resource data condition. Some of these approaches include language adaptive training and shared layer retraining BIBREF29 . Among them, the most benefited method is the parameter sharing technique BIBREF10 . To incorporate the former approach into encoder, CTC and attention decoder model, we performed the following experiments: Stage 0 - Naive approach In this approach, the model is first trained with 10 multiple languages as denoted in table TABREF14 approximating to 600 hours of training data. data from all languages available during training is used to build a single seq2seq model. The model is trained with a character label set composed of characters from all languages including both train and target set as mentioned in table TABREF14 . The model provides better generalization across languages. Languages with limited data when trained with other languages allows them to be robust and helps in improving the recognition performance. In spite of being simple, the model has limitations in keeping the target language data unseen during training. 
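A minimal illustration of the shared character label set used by this naive multilingual model: the output vocabulary is simply the union of the character inventories of all train and target languages, plus special symbols such as the CTC blank. The transcripts and the exact special-symbol names below are placeholders, not the actual BABEL label set.

```python
# Build a multilingual character label set as the union of per-language
# character inventories; the special symbols listed are typical choices for
# a joint CTC/attention model and are assumptions here.
def build_char_labels(transcripts_per_language):
    chars = set()
    for transcripts in transcripts_per_language.values():
        for utterance in transcripts:
            chars.update(utterance)
    specials = ["<blank>", "<unk>", "<space>", "<sos/eos>"]
    return specials + sorted(chars)

labels = build_char_labels({
    "tokpisin": ["mi go long taun"],      # placeholder transcripts
    "georgian": ["გამარჯობა"],
})
char_to_id = {c: i for i, c in enumerate(labels)}
```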
Table TABREF16 shows the recognition performance of naive multilingual approach using BLSTMP and VGG model against a monolingual model trained with BLSTMP. The results clearly indicate that having a better architecture such as VGG-BLSTM helps in improving multilingual performance. Except Pashto, Georgian and Tokpisin, the multilingual VGG-BLSTM model gave 8.8 % absolute gain in average over monolingual model. In case of multilingual BLSTMP, except Pashto and Georgian an absolute gain of 5.0 % in average is observed over monolingual model. Even though the VGG-BLSTM gave improvements, we were not able to perform stage-1 and stage-2 retraining with it due to time constraints. Thus, we proceed further with multilingual BLSTMP model for retraining experiments tabulated below. Stage 1 - Retraining decoder only To alleviate the limitation in the previous approach, the final layer of the seq2seq model which is mainly responsible for classification is retrained to the target language. In previous works BIBREF10 , BIBREF29 related to hybrid DNN/RNN models and CTC based models BIBREF11 , BIBREF14 the softmax layer is only adapted. However in our case, the attention decoder and CTC decoder both have to be retrained to the target language. This means the CTC and attention layers are only updated for gradients during this stage. We found using SGD optimizer with initial learning rate of INLINEFORM0 works better for retraining compared to AdaDelta. The learning rate is decayed in this training at a factor of INLINEFORM0 if there is a drop in validation accuracy. Table TABREF20 shows the performance of simply retraining the last layer using a single target language Assamese. Stage 2 - Finetuning both encoder and decoder Based on the observations from stage-1 model in section SECREF22 , we found that simply retraining the decoder towards a target language resulted in degrading %CER the performance from 45.6 to 61.3. This is mainly due to the difference in distribution across encoder and decoder. So, to alleviate this difference the encoder and decoder is once again retrained or fine-tuned using the model from stage-1. The optimizer used here is SGD as in stage-1, but the initial learning rate is kept to INLINEFORM0 and decayed based on validation performance. The resulting model gave an absolute gain of 1.6% when finetuned a multilingual model after 4th epoch. Also, finetuning a model after 15th epoch gave an absolute gain of 4.3%. To further investigate the performance of this approach across different target data sizes, we split the train set into INLINEFORM0 5 hours, INLINEFORM1 10 hours, INLINEFORM2 20 hours and INLINEFORM3 full set. Since, in this approach the model is only finetuned by initializing from stage-1 model, the model architecture is fixed for all data sizes. Figure FIGREF23 shows the effectiveness of finetuning both encoder and decoder. The gains from 5 to 10 hours was more compared to 20 hours to full set. Table TABREF25 tabulates the % CER obtained by retraining the stage-1 model with INLINEFORM0 full set of target language data. An absolute gain is observed using stage-2 retraining across all languages compared to monolingual model. Multilingual RNNLM In an ASR system, a language model (LM) takes an important role by incorporating external knowledge into the system. Conventional ASR systems combine an LM with an acoustic model by FST giving a huge performance gain. This trend is also shown in general including hybrid ASR systems and neural network-based sequence-to-sequence ASR systems. 
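Before turning to the language-model experiments, the stage-1/stage-2 schedule described above can be summarized with a short PyTorch-style sketch over a toy encoder-decoder; none of this is the actual ESPnet training code, and the layer sizes and learning rates are placeholders (the paper's initial learning rates did not survive extraction).

```python
import torch
from torch import nn, optim

# Toy encoder-decoder standing in for the CTC/attention seq2seq model.
class ToySeq2Seq(nn.Module):
    def __init__(self, feat_dim=80, hidden=320, n_chars=100):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.ctc_out = nn.Linear(hidden, n_chars)       # CTC branch
        self.att_decoder = nn.Linear(hidden, n_chars)   # stand-in for the attention decoder

def stage1_retrain_decoder(model, lr=1e-2):
    """Stage 1: freeze the encoder; update only the CTC and attention layers."""
    for p in model.encoder.parameters():
        p.requires_grad = False
    params = list(model.ctc_out.parameters()) + list(model.att_decoder.parameters())
    return optim.SGD(params, lr=lr)

def stage2_finetune_all(model, lr=1e-3):
    """Stage 2: unfreeze everything and fine-tune encoder and decoder jointly."""
    for p in model.parameters():
        p.requires_grad = True
    return optim.SGD(model.parameters(), lr=lr)

model = ToySeq2Seq()
opt1 = stage1_retrain_decoder(model)   # ... train on target-language data ...
opt2 = stage2_finetune_all(model)      # ... then continue training, decaying lr
                                       # when validation accuracy drops ...
```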
The following experiments show a benefit of using a language model in decoding with the previous stage-2 transferred models. Although the performance gains in %CER are also generally observed over all target languages, the improvement in %WER was more distinctive. The results shown in the following Fig. FIGREF27 are in %WER. “whole” in each figure means we used all the available data for the target language as full set explained before. We used a character-level RNNLM, which was trained with 2-layer LSTM on character sequences. We use all available paired text in the corresponding target language to train the LM for the language. No external text data were used. All language models are trained separately from the seq2seq models. When building dictionary, we combined all the characters over all 15 languages mentioned in table TABREF14 to make them work with transferred models. Regardless of the amount of data used for transfer learning, the RNNLM provides consistent gains across all languages over different data sizes. As explained already, language models were trained separately and used to decode jointly with seq2seq models. The intuition behind it is to use the separately trained language model as a complementary component that works with a implicit language model within a seq2seq decoder. The way of RNNLM assisting decoding follows the equation below: DISPLAYFORM0 INLINEFORM0 is a scaling factor that combines the scores from a joint decoding eq.( EQREF13 ) with RNN-LM, denoted as INLINEFORM1 . This approach is called shallow fusion. Our experiments for target languages show that the gains from adding RNNLM are consistent regardless of the amount of data used for transfer learning. In other words, in Figure FIGREF27 , the gap between two lines are almost consistent over all languages. Also, we observe the gain we get by adding RNN-LM in decoding is large. For example, in the case of assamese, the gain by RNN-LM in decoding with a model retrained on 5 hours of the target language data is almost comparable with the model stage-2 retrained with 20 hours of target language data. On average, absolute gain INLINEFORM0 6% is obtained across all target languages as noted in table TABREF28 . Conclusion In this work, we have shown the importance of transfer learning approach such as stage-2 multilingual retraining in a seq2seq model setting. Also, careful selection of train and target languages from BABEL provide a wide variety in recognition performance (%CER) and helps in understanding the efficacy of seq2seq model. The experiments using character-based RNNLM showed the importance of language model in boosting recognition performance (%WER) over all different hours of target data available for transfer learning. Table TABREF25 and TABREF28 summarizes, the effect of these techniques in terms of %CER and %WER. These methods also show their flexibility in incorporating it in attention and CTC based seq2seq model without compromising loss in performance. Future work We could use better architectures such as VGG-BLSTM as a multilingual prior model before transferring them to a new target language by performing stage-2 retraining. The naive multilingual approach can be improved by including language vectors as input or target during training to reduce the confusions. Also, investigation of multilingual bottleneck features BIBREF30 for seq2seq model can provide better performance. 
Apart from using the character-level language model as in this work, a word-level RNNLM can be connected during decoding to further improve %WER. The attention-based decoder can also be aided by an RNNLM through the cold fusion approach during training to obtain a better-trained model. In the near future, we will incorporate all the above techniques to attain performance comparable with state-of-the-art hybrid DNN/RNN-HMM systems.
BABEL speech corpus
093dd1e403eac146bcd19b51a2ace316b36c6264
093dd1e403eac146bcd19b51a2ace316b36c6264_0
Q: Do they report BLEU scores? Text: Introduction The sequence-to-sequence (seq2seq) model proposed in BIBREF0 , BIBREF1 , BIBREF2 is a neural architecture for performing sequence classification and later adopted to perform speech recognition in BIBREF3 , BIBREF4 , BIBREF5 . The model allows to integrate the main blocks of ASR such as acoustic model, alignment model and language model into a single framework. The recent ASR advancements in connectionist temporal classification (CTC) BIBREF5 , BIBREF4 and attention BIBREF3 , BIBREF6 based approaches has created larger interest in speech community to use seq2seq models. To leverage performance gains from this model as similar or better to conventional hybrid RNN/DNN-HMM models requires a huge amount of data BIBREF7 . Intuitively, this is due to the wide-range role of the model in performing alignment and language modeling along with acoustic to character label mapping at each iteration. In this paper, we explore the multilingual training approaches BIBREF8 , BIBREF9 , BIBREF10 used in hybrid DNN/RNN-HMMs to incorporate them into the seq2seq models. In a context of applications of multilingual approaches towards seq2seq model, CTC is mainly used instead of the attention models. A multilingual CTC is proposed in BIBREF11 , which uses a universal phoneset, FST decoder and language model. The authors also use linear hidden unit contribution (LHUC) BIBREF12 technique to rescale the hidden unit outputs for each language as a way to adapt to a particular language. Another work BIBREF13 on multilingual CTC shows the importance of language adaptive vectors as auxiliary input to the encoder in multilingual CTC model. The decoder used here is a simple INLINEFORM0 decoder. An extensive analysis on multilingual CTC mainly focusing on improving under limited data condition is performed in BIBREF14 . Here, the authors use a word level FST decoder integrated with CTC during decoding. On a similar front, attention models are explored within a multilingual setup in BIBREF15 , BIBREF16 based on attention-based seq2seq to build a model from multiple languages. The data is just combined together assuming the target languages are seen during the training. And, hence no special transfer learning techniques were used here to address the unseen languages during training. The main motivation and contribution behind this work is as follows: Sequence-to-Sequence Model In this work, we use the attention based approach BIBREF1 as it provides an effective methodology to perform sequence-to-sequence (seq2seq) training. Considering the limitations of attention in performing monotonic alignment BIBREF18 , BIBREF19 , we choose to use CTC loss function to aid the attention mechanism in both training and decoding. The basic network architecture is shown in Fig. FIGREF7 . Let INLINEFORM0 be a INLINEFORM1 -length speech feature sequence and INLINEFORM2 be a INLINEFORM3 -length grapheme sequence. A multi-objective learning framework INLINEFORM4 proposed in BIBREF17 is used in this work to unify attention loss INLINEFORM5 and CTC loss INLINEFORM6 with a linear interpolation weight INLINEFORM7 , as follows: DISPLAYFORM0 The unified model allows to obtain both monotonicity and effective sequence level training. 
INLINEFORM0 represents the posterior probability of character label sequence INLINEFORM1 w.r.t input sequence INLINEFORM2 based on the attention approach, which is decomposed with the probabilistic chain rule, as follows: DISPLAYFORM0 where INLINEFORM0 denotes the ground truth history. Detailed explanations about the attention mechanism is described later. Similarly, INLINEFORM0 represents the posterior probability based on the CTC approach. DISPLAYFORM0 where INLINEFORM0 is a CTC state sequence composed of the original grapheme set and the additional blank symbol. INLINEFORM1 is a set of all possible sequences given the character sequence INLINEFORM2 . The following paragraphs explain the encoder, attention decoder, CTC, and joint decoding used in our approach. Data details and experimental setup In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of conversational telephone speech (CTS) but some scripted recordings and far field recordings are presented as well. Table TABREF14 presents the details of the languages used in this work for training and evaluation. 80 dimensional Mel-filterbank (fbank) features are then extracted from the speech samples using a sliding window of size 25 ms with 10ms stride. KALDI toolkit BIBREF24 is used to perform the feature processing. The fbank features are then fed to a seq2seq model with the following configuration: The Bi-RNN BIBREF25 models mentioned above uses a LSTM BIBREF26 cell followed by a projection layer (BLSTMP). In our experiments below, we use only a character-level seq2seq model trained by CTC and attention decoder. Thus in the following experiments we intend to use character error rate (% CER) as a suitable measure to analyze the model performance. However, in section SECREF26 we integrate a character-level RNNLM BIBREF27 with seq2seq model externally and showcase the performance in terms of word error rate (% WER). In this case the words are obtained by concatenating the characters and the space together for scoring with reference words. All experiments are implemented in ESPnet, end-to-end speech processing toolkit BIBREF28 . Multilingual experiments Multilingual approaches used in hybrid RNN/DNN-HMM systems BIBREF10 have been used for for tackling the problem of low-resource data condition. Some of these approaches include language adaptive training and shared layer retraining BIBREF29 . Among them, the most benefited method is the parameter sharing technique BIBREF10 . To incorporate the former approach into encoder, CTC and attention decoder model, we performed the following experiments: Stage 0 - Naive approach In this approach, the model is first trained with 10 multiple languages as denoted in table TABREF14 approximating to 600 hours of training data. data from all languages available during training is used to build a single seq2seq model. The model is trained with a character label set composed of characters from all languages including both train and target set as mentioned in table TABREF14 . The model provides better generalization across languages. Languages with limited data when trained with other languages allows them to be robust and helps in improving the recognition performance. In spite of being simple, the model has limitations in keeping the target language data unseen during training. 
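For reference, the character error rate used as the evaluation measure here reduces to an edit distance between hypothesis and reference character sequences, normalized by the reference length; the sketch below is a generic implementation rather than the scoring pipeline actually used, and the example strings are invented.

```python
# Levenshtein distance with a single-row dynamic-programming table, then CER
# as (substitutions + insertions + deletions) / reference length, in percent.
def edit_distance(ref: str, hyp: str) -> int:
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution (or match)
    return d[len(hyp)]

def cer(ref: str, hyp: str) -> float:
    return 100.0 * edit_distance(ref, hyp) / max(len(ref), 1)

print(cer("hello world", "helo wurld"))   # -> 18.18...
```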
Table TABREF16 shows the recognition performance of naive multilingual approach using BLSTMP and VGG model against a monolingual model trained with BLSTMP. The results clearly indicate that having a better architecture such as VGG-BLSTM helps in improving multilingual performance. Except Pashto, Georgian and Tokpisin, the multilingual VGG-BLSTM model gave 8.8 % absolute gain in average over monolingual model. In case of multilingual BLSTMP, except Pashto and Georgian an absolute gain of 5.0 % in average is observed over monolingual model. Even though the VGG-BLSTM gave improvements, we were not able to perform stage-1 and stage-2 retraining with it due to time constraints. Thus, we proceed further with multilingual BLSTMP model for retraining experiments tabulated below. Stage 1 - Retraining decoder only To alleviate the limitation in the previous approach, the final layer of the seq2seq model which is mainly responsible for classification is retrained to the target language. In previous works BIBREF10 , BIBREF29 related to hybrid DNN/RNN models and CTC based models BIBREF11 , BIBREF14 the softmax layer is only adapted. However in our case, the attention decoder and CTC decoder both have to be retrained to the target language. This means the CTC and attention layers are only updated for gradients during this stage. We found using SGD optimizer with initial learning rate of INLINEFORM0 works better for retraining compared to AdaDelta. The learning rate is decayed in this training at a factor of INLINEFORM0 if there is a drop in validation accuracy. Table TABREF20 shows the performance of simply retraining the last layer using a single target language Assamese. Stage 2 - Finetuning both encoder and decoder Based on the observations from stage-1 model in section SECREF22 , we found that simply retraining the decoder towards a target language resulted in degrading %CER the performance from 45.6 to 61.3. This is mainly due to the difference in distribution across encoder and decoder. So, to alleviate this difference the encoder and decoder is once again retrained or fine-tuned using the model from stage-1. The optimizer used here is SGD as in stage-1, but the initial learning rate is kept to INLINEFORM0 and decayed based on validation performance. The resulting model gave an absolute gain of 1.6% when finetuned a multilingual model after 4th epoch. Also, finetuning a model after 15th epoch gave an absolute gain of 4.3%. To further investigate the performance of this approach across different target data sizes, we split the train set into INLINEFORM0 5 hours, INLINEFORM1 10 hours, INLINEFORM2 20 hours and INLINEFORM3 full set. Since, in this approach the model is only finetuned by initializing from stage-1 model, the model architecture is fixed for all data sizes. Figure FIGREF23 shows the effectiveness of finetuning both encoder and decoder. The gains from 5 to 10 hours was more compared to 20 hours to full set. Table TABREF25 tabulates the % CER obtained by retraining the stage-1 model with INLINEFORM0 full set of target language data. An absolute gain is observed using stage-2 retraining across all languages compared to monolingual model. Multilingual RNNLM In an ASR system, a language model (LM) takes an important role by incorporating external knowledge into the system. Conventional ASR systems combine an LM with an acoustic model by FST giving a huge performance gain. This trend is also shown in general including hybrid ASR systems and neural network-based sequence-to-sequence ASR systems. 
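The two retraining stages differ mainly in which parameters receive gradients and in the optimizer settings. The sketch below illustrates that switch under assumed module names (encoder, ctc, decoder) and placeholder learning rates, since the exact values appear above only as INLINEFORM tokens.

```python
import torch

def configure_stage(model, stage, lr_stage1=1e-2, lr_stage2=1e-3):
    """Set up parameters and optimizer for stage-1 / stage-2 retraining.

    Stage 1: only the CTC layer and the attention decoder receive gradients.
    Stage 2: encoder and decoder are fine-tuned jointly from the stage-1 model.
    Module names and learning rates are illustrative assumptions.
    """
    if stage == 1:
        for p in model.encoder.parameters():
            p.requires_grad = False      # keep the multilingual encoder frozen
        for p in list(model.ctc.parameters()) + list(model.decoder.parameters()):
            p.requires_grad = True
        lr = lr_stage1
    else:
        for p in model.parameters():     # fine-tune everything in stage 2
            p.requires_grad = True
        lr = lr_stage2
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.SGD(trainable, lr=lr)
```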
The following experiments show a benefit of using a language model in decoding with the previous stage-2 transferred models. Although the performance gains in %CER are also generally observed over all target languages, the improvement in %WER was more distinctive. The results shown in the following Fig. FIGREF27 are in %WER. “whole” in each figure means we used all the available data for the target language as full set explained before. We used a character-level RNNLM, which was trained with 2-layer LSTM on character sequences. We use all available paired text in the corresponding target language to train the LM for the language. No external text data were used. All language models are trained separately from the seq2seq models. When building dictionary, we combined all the characters over all 15 languages mentioned in table TABREF14 to make them work with transferred models. Regardless of the amount of data used for transfer learning, the RNNLM provides consistent gains across all languages over different data sizes. As explained already, language models were trained separately and used to decode jointly with seq2seq models. The intuition behind it is to use the separately trained language model as a complementary component that works with a implicit language model within a seq2seq decoder. The way of RNNLM assisting decoding follows the equation below: DISPLAYFORM0 INLINEFORM0 is a scaling factor that combines the scores from a joint decoding eq.( EQREF13 ) with RNN-LM, denoted as INLINEFORM1 . This approach is called shallow fusion. Our experiments for target languages show that the gains from adding RNNLM are consistent regardless of the amount of data used for transfer learning. In other words, in Figure FIGREF27 , the gap between two lines are almost consistent over all languages. Also, we observe the gain we get by adding RNN-LM in decoding is large. For example, in the case of assamese, the gain by RNN-LM in decoding with a model retrained on 5 hours of the target language data is almost comparable with the model stage-2 retrained with 20 hours of target language data. On average, absolute gain INLINEFORM0 6% is obtained across all target languages as noted in table TABREF28 . Conclusion In this work, we have shown the importance of transfer learning approach such as stage-2 multilingual retraining in a seq2seq model setting. Also, careful selection of train and target languages from BABEL provide a wide variety in recognition performance (%CER) and helps in understanding the efficacy of seq2seq model. The experiments using character-based RNNLM showed the importance of language model in boosting recognition performance (%WER) over all different hours of target data available for transfer learning. Table TABREF25 and TABREF28 summarizes, the effect of these techniques in terms of %CER and %WER. These methods also show their flexibility in incorporating it in attention and CTC based seq2seq model without compromising loss in performance. Future work We could use better architectures such as VGG-BLSTM as a multilingual prior model before transferring them to a new target language by performing stage-2 retraining. The naive multilingual approach can be improved by including language vectors as input or target during training to reduce the confusions. Also, investigation of multilingual bottleneck features BIBREF30 for seq2seq model can provide better performance. 
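As a concrete illustration of the shallow fusion used above, the sketch below rescores beam hypotheses by adding the RNNLM log-probability scaled by a factor; the factor value and the hypothesis fields are assumptions, not the decoder's actual data structures.

```python
def shallow_fusion_rescore(hypotheses, gamma=0.3):
    """Rescore beam hypotheses with an external RNNLM (shallow fusion).

    Each hypothesis is a dict holding the joint CTC/attention log-probability
    ('joint_score') and the RNNLM log-probability ('lm_score'); gamma is the
    scaling factor (an assumed value here).
    """
    for hyp in hypotheses:
        hyp["score"] = hyp["joint_score"] + gamma * hyp["lm_score"]
    return sorted(hypotheses, key=lambda h: h["score"], reverse=True)
```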
Apart from using the character-level language model as in this work, a word-level RNNLM can be connected during decoding to further improve %WER. The attention-based decoder can also be aided by an RNNLM through the cold fusion approach during training to attain a better-trained model. In the near future, we will incorporate all the above techniques to obtain performance comparable with state-of-the-art hybrid DNN/RNN-HMM systems.
No
1adbdb5f08d67d8b05328ccc86d297ac01bf076c
1adbdb5f08d67d8b05328ccc86d297ac01bf076c_0
Q: What languages do they use? Text: Introduction The sequence-to-sequence (seq2seq) model proposed in BIBREF0 , BIBREF1 , BIBREF2 is a neural architecture for performing sequence classification and later adopted to perform speech recognition in BIBREF3 , BIBREF4 , BIBREF5 . The model allows to integrate the main blocks of ASR such as acoustic model, alignment model and language model into a single framework. The recent ASR advancements in connectionist temporal classification (CTC) BIBREF5 , BIBREF4 and attention BIBREF3 , BIBREF6 based approaches has created larger interest in speech community to use seq2seq models. To leverage performance gains from this model as similar or better to conventional hybrid RNN/DNN-HMM models requires a huge amount of data BIBREF7 . Intuitively, this is due to the wide-range role of the model in performing alignment and language modeling along with acoustic to character label mapping at each iteration. In this paper, we explore the multilingual training approaches BIBREF8 , BIBREF9 , BIBREF10 used in hybrid DNN/RNN-HMMs to incorporate them into the seq2seq models. In a context of applications of multilingual approaches towards seq2seq model, CTC is mainly used instead of the attention models. A multilingual CTC is proposed in BIBREF11 , which uses a universal phoneset, FST decoder and language model. The authors also use linear hidden unit contribution (LHUC) BIBREF12 technique to rescale the hidden unit outputs for each language as a way to adapt to a particular language. Another work BIBREF13 on multilingual CTC shows the importance of language adaptive vectors as auxiliary input to the encoder in multilingual CTC model. The decoder used here is a simple INLINEFORM0 decoder. An extensive analysis on multilingual CTC mainly focusing on improving under limited data condition is performed in BIBREF14 . Here, the authors use a word level FST decoder integrated with CTC during decoding. On a similar front, attention models are explored within a multilingual setup in BIBREF15 , BIBREF16 based on attention-based seq2seq to build a model from multiple languages. The data is just combined together assuming the target languages are seen during the training. And, hence no special transfer learning techniques were used here to address the unseen languages during training. The main motivation and contribution behind this work is as follows: Sequence-to-Sequence Model In this work, we use the attention based approach BIBREF1 as it provides an effective methodology to perform sequence-to-sequence (seq2seq) training. Considering the limitations of attention in performing monotonic alignment BIBREF18 , BIBREF19 , we choose to use CTC loss function to aid the attention mechanism in both training and decoding. The basic network architecture is shown in Fig. FIGREF7 . Let INLINEFORM0 be a INLINEFORM1 -length speech feature sequence and INLINEFORM2 be a INLINEFORM3 -length grapheme sequence. A multi-objective learning framework INLINEFORM4 proposed in BIBREF17 is used in this work to unify attention loss INLINEFORM5 and CTC loss INLINEFORM6 with a linear interpolation weight INLINEFORM7 , as follows: DISPLAYFORM0 The unified model allows to obtain both monotonicity and effective sequence level training. 
INLINEFORM0 represents the posterior probability of character label sequence INLINEFORM1 w.r.t input sequence INLINEFORM2 based on the attention approach, which is decomposed with the probabilistic chain rule, as follows: DISPLAYFORM0 where INLINEFORM0 denotes the ground truth history. Detailed explanations about the attention mechanism is described later. Similarly, INLINEFORM0 represents the posterior probability based on the CTC approach. DISPLAYFORM0 where INLINEFORM0 is a CTC state sequence composed of the original grapheme set and the additional blank symbol. INLINEFORM1 is a set of all possible sequences given the character sequence INLINEFORM2 . The following paragraphs explain the encoder, attention decoder, CTC, and joint decoding used in our approach. Data details and experimental setup In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of conversational telephone speech (CTS) but some scripted recordings and far field recordings are presented as well. Table TABREF14 presents the details of the languages used in this work for training and evaluation. 80 dimensional Mel-filterbank (fbank) features are then extracted from the speech samples using a sliding window of size 25 ms with 10ms stride. KALDI toolkit BIBREF24 is used to perform the feature processing. The fbank features are then fed to a seq2seq model with the following configuration: The Bi-RNN BIBREF25 models mentioned above uses a LSTM BIBREF26 cell followed by a projection layer (BLSTMP). In our experiments below, we use only a character-level seq2seq model trained by CTC and attention decoder. Thus in the following experiments we intend to use character error rate (% CER) as a suitable measure to analyze the model performance. However, in section SECREF26 we integrate a character-level RNNLM BIBREF27 with seq2seq model externally and showcase the performance in terms of word error rate (% WER). In this case the words are obtained by concatenating the characters and the space together for scoring with reference words. All experiments are implemented in ESPnet, end-to-end speech processing toolkit BIBREF28 . Multilingual experiments Multilingual approaches used in hybrid RNN/DNN-HMM systems BIBREF10 have been used for for tackling the problem of low-resource data condition. Some of these approaches include language adaptive training and shared layer retraining BIBREF29 . Among them, the most benefited method is the parameter sharing technique BIBREF10 . To incorporate the former approach into encoder, CTC and attention decoder model, we performed the following experiments: Stage 0 - Naive approach In this approach, the model is first trained with 10 multiple languages as denoted in table TABREF14 approximating to 600 hours of training data. data from all languages available during training is used to build a single seq2seq model. The model is trained with a character label set composed of characters from all languages including both train and target set as mentioned in table TABREF14 . The model provides better generalization across languages. Languages with limited data when trained with other languages allows them to be robust and helps in improving the recognition performance. In spite of being simple, the model has limitations in keeping the target language data unseen during training. 
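A minimal sketch of how the shared character label set for the naive multilingual model can be assembled from the transcripts of all train and target languages is given below; the special-symbol names and dictionary layout are assumptions rather than the toolkit's exact conventions.

```python
def build_multilingual_charset(corpora, specials=("<blank>", "<sos>", "<eos>", "<unk>")):
    """Assemble one character label set over all train and target languages.

    `corpora` maps a language code to an iterable of transcript strings; the
    union of their characters, plus the CTC blank and sentence markers, forms
    the output layer of the naive multilingual model.
    """
    charset = set()
    for transcripts in corpora.values():
        for line in transcripts:
            charset.update(line)
    labels = list(specials) + sorted(charset)
    return {ch: idx for idx, ch in enumerate(labels)}
```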
Table TABREF16 shows the recognition performance of naive multilingual approach using BLSTMP and VGG model against a monolingual model trained with BLSTMP. The results clearly indicate that having a better architecture such as VGG-BLSTM helps in improving multilingual performance. Except Pashto, Georgian and Tokpisin, the multilingual VGG-BLSTM model gave 8.8 % absolute gain in average over monolingual model. In case of multilingual BLSTMP, except Pashto and Georgian an absolute gain of 5.0 % in average is observed over monolingual model. Even though the VGG-BLSTM gave improvements, we were not able to perform stage-1 and stage-2 retraining with it due to time constraints. Thus, we proceed further with multilingual BLSTMP model for retraining experiments tabulated below. Stage 1 - Retraining decoder only To alleviate the limitation in the previous approach, the final layer of the seq2seq model which is mainly responsible for classification is retrained to the target language. In previous works BIBREF10 , BIBREF29 related to hybrid DNN/RNN models and CTC based models BIBREF11 , BIBREF14 the softmax layer is only adapted. However in our case, the attention decoder and CTC decoder both have to be retrained to the target language. This means the CTC and attention layers are only updated for gradients during this stage. We found using SGD optimizer with initial learning rate of INLINEFORM0 works better for retraining compared to AdaDelta. The learning rate is decayed in this training at a factor of INLINEFORM0 if there is a drop in validation accuracy. Table TABREF20 shows the performance of simply retraining the last layer using a single target language Assamese. Stage 2 - Finetuning both encoder and decoder Based on the observations from stage-1 model in section SECREF22 , we found that simply retraining the decoder towards a target language resulted in degrading %CER the performance from 45.6 to 61.3. This is mainly due to the difference in distribution across encoder and decoder. So, to alleviate this difference the encoder and decoder is once again retrained or fine-tuned using the model from stage-1. The optimizer used here is SGD as in stage-1, but the initial learning rate is kept to INLINEFORM0 and decayed based on validation performance. The resulting model gave an absolute gain of 1.6% when finetuned a multilingual model after 4th epoch. Also, finetuning a model after 15th epoch gave an absolute gain of 4.3%. To further investigate the performance of this approach across different target data sizes, we split the train set into INLINEFORM0 5 hours, INLINEFORM1 10 hours, INLINEFORM2 20 hours and INLINEFORM3 full set. Since, in this approach the model is only finetuned by initializing from stage-1 model, the model architecture is fixed for all data sizes. Figure FIGREF23 shows the effectiveness of finetuning both encoder and decoder. The gains from 5 to 10 hours was more compared to 20 hours to full set. Table TABREF25 tabulates the % CER obtained by retraining the stage-1 model with INLINEFORM0 full set of target language data. An absolute gain is observed using stage-2 retraining across all languages compared to monolingual model. Multilingual RNNLM In an ASR system, a language model (LM) takes an important role by incorporating external knowledge into the system. Conventional ASR systems combine an LM with an acoustic model by FST giving a huge performance gain. This trend is also shown in general including hybrid ASR systems and neural network-based sequence-to-sequence ASR systems. 
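The retraining stages above decay the learning rate by a fixed factor whenever validation accuracy drops. A minimal scheduler expressing that rule is sketched below; the decay factor is an assumption, since the paper's value appears only as an INLINEFORM placeholder.

```python
class DecayOnValidationDrop:
    """Multiply the learning rate by `factor` whenever validation accuracy
    drops, as done during stage-1/stage-2 retraining (factor is assumed)."""

    def __init__(self, optimizer, factor=0.5):
        self.optimizer = optimizer
        self.factor = factor
        self.best_acc = float("-inf")

    def step(self, val_acc):
        if val_acc < self.best_acc:
            for group in self.optimizer.param_groups:
                group["lr"] *= self.factor   # decay on a validation drop
        else:
            self.best_acc = val_acc
```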
The following experiments show a benefit of using a language model in decoding with the previous stage-2 transferred models. Although the performance gains in %CER are also generally observed over all target languages, the improvement in %WER was more distinctive. The results shown in the following Fig. FIGREF27 are in %WER. “whole” in each figure means we used all the available data for the target language as full set explained before. We used a character-level RNNLM, which was trained with 2-layer LSTM on character sequences. We use all available paired text in the corresponding target language to train the LM for the language. No external text data were used. All language models are trained separately from the seq2seq models. When building dictionary, we combined all the characters over all 15 languages mentioned in table TABREF14 to make them work with transferred models. Regardless of the amount of data used for transfer learning, the RNNLM provides consistent gains across all languages over different data sizes. As explained already, language models were trained separately and used to decode jointly with seq2seq models. The intuition behind it is to use the separately trained language model as a complementary component that works with a implicit language model within a seq2seq decoder. The way of RNNLM assisting decoding follows the equation below: DISPLAYFORM0 INLINEFORM0 is a scaling factor that combines the scores from a joint decoding eq.( EQREF13 ) with RNN-LM, denoted as INLINEFORM1 . This approach is called shallow fusion. Our experiments for target languages show that the gains from adding RNNLM are consistent regardless of the amount of data used for transfer learning. In other words, in Figure FIGREF27 , the gap between two lines are almost consistent over all languages. Also, we observe the gain we get by adding RNN-LM in decoding is large. For example, in the case of assamese, the gain by RNN-LM in decoding with a model retrained on 5 hours of the target language data is almost comparable with the model stage-2 retrained with 20 hours of target language data. On average, absolute gain INLINEFORM0 6% is obtained across all target languages as noted in table TABREF28 . Conclusion In this work, we have shown the importance of transfer learning approach such as stage-2 multilingual retraining in a seq2seq model setting. Also, careful selection of train and target languages from BABEL provide a wide variety in recognition performance (%CER) and helps in understanding the efficacy of seq2seq model. The experiments using character-based RNNLM showed the importance of language model in boosting recognition performance (%WER) over all different hours of target data available for transfer learning. Table TABREF25 and TABREF28 summarizes, the effect of these techniques in terms of %CER and %WER. These methods also show their flexibility in incorporating it in attention and CTC based seq2seq model without compromising loss in performance. Future work We could use better architectures such as VGG-BLSTM as a multilingual prior model before transferring them to a new target language by performing stage-2 retraining. The naive multilingual approach can be improved by including language vectors as input or target during training to reduce the confusions. Also, investigation of multilingual bottleneck features BIBREF30 for seq2seq model can provide better performance. 
Apart from using the character-level language model as in this work, a word-level RNNLM can be connected during decoding to further improve %WER. The attention-based decoder can also be aided by an RNNLM through the cold fusion approach during training to attain a better-trained model. In the near future, we will incorporate all the above techniques to obtain performance comparable with state-of-the-art hybrid DNN/RNN-HMM systems.
The training languages are Cantonese, Bengali, Pashto, Turkish, Vietnamese, Haitian, Tamil, Kurdish, Tokpisin, and Georgian, while Assamese, Tagalog, Swahili, and Lao are used as target languages.
da82b6dad2edd4911db1dc59e4ccd7f66c5fd79c
da82b6dad2edd4911db1dc59e4ccd7f66c5fd79c_0
Q: What architectures are explored to improve the seq2seq model? Text: Introduction The sequence-to-sequence (seq2seq) model proposed in BIBREF0 , BIBREF1 , BIBREF2 is a neural architecture for performing sequence classification and later adopted to perform speech recognition in BIBREF3 , BIBREF4 , BIBREF5 . The model allows to integrate the main blocks of ASR such as acoustic model, alignment model and language model into a single framework. The recent ASR advancements in connectionist temporal classification (CTC) BIBREF5 , BIBREF4 and attention BIBREF3 , BIBREF6 based approaches has created larger interest in speech community to use seq2seq models. To leverage performance gains from this model as similar or better to conventional hybrid RNN/DNN-HMM models requires a huge amount of data BIBREF7 . Intuitively, this is due to the wide-range role of the model in performing alignment and language modeling along with acoustic to character label mapping at each iteration. In this paper, we explore the multilingual training approaches BIBREF8 , BIBREF9 , BIBREF10 used in hybrid DNN/RNN-HMMs to incorporate them into the seq2seq models. In a context of applications of multilingual approaches towards seq2seq model, CTC is mainly used instead of the attention models. A multilingual CTC is proposed in BIBREF11 , which uses a universal phoneset, FST decoder and language model. The authors also use linear hidden unit contribution (LHUC) BIBREF12 technique to rescale the hidden unit outputs for each language as a way to adapt to a particular language. Another work BIBREF13 on multilingual CTC shows the importance of language adaptive vectors as auxiliary input to the encoder in multilingual CTC model. The decoder used here is a simple INLINEFORM0 decoder. An extensive analysis on multilingual CTC mainly focusing on improving under limited data condition is performed in BIBREF14 . Here, the authors use a word level FST decoder integrated with CTC during decoding. On a similar front, attention models are explored within a multilingual setup in BIBREF15 , BIBREF16 based on attention-based seq2seq to build a model from multiple languages. The data is just combined together assuming the target languages are seen during the training. And, hence no special transfer learning techniques were used here to address the unseen languages during training. The main motivation and contribution behind this work is as follows: Sequence-to-Sequence Model In this work, we use the attention based approach BIBREF1 as it provides an effective methodology to perform sequence-to-sequence (seq2seq) training. Considering the limitations of attention in performing monotonic alignment BIBREF18 , BIBREF19 , we choose to use CTC loss function to aid the attention mechanism in both training and decoding. The basic network architecture is shown in Fig. FIGREF7 . Let INLINEFORM0 be a INLINEFORM1 -length speech feature sequence and INLINEFORM2 be a INLINEFORM3 -length grapheme sequence. A multi-objective learning framework INLINEFORM4 proposed in BIBREF17 is used in this work to unify attention loss INLINEFORM5 and CTC loss INLINEFORM6 with a linear interpolation weight INLINEFORM7 , as follows: DISPLAYFORM0 The unified model allows to obtain both monotonicity and effective sequence level training. 
INLINEFORM0 represents the posterior probability of character label sequence INLINEFORM1 w.r.t input sequence INLINEFORM2 based on the attention approach, which is decomposed with the probabilistic chain rule, as follows: DISPLAYFORM0 where INLINEFORM0 denotes the ground truth history. Detailed explanations about the attention mechanism is described later. Similarly, INLINEFORM0 represents the posterior probability based on the CTC approach. DISPLAYFORM0 where INLINEFORM0 is a CTC state sequence composed of the original grapheme set and the additional blank symbol. INLINEFORM1 is a set of all possible sequences given the character sequence INLINEFORM2 . The following paragraphs explain the encoder, attention decoder, CTC, and joint decoding used in our approach. Data details and experimental setup In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of conversational telephone speech (CTS) but some scripted recordings and far field recordings are presented as well. Table TABREF14 presents the details of the languages used in this work for training and evaluation. 80 dimensional Mel-filterbank (fbank) features are then extracted from the speech samples using a sliding window of size 25 ms with 10ms stride. KALDI toolkit BIBREF24 is used to perform the feature processing. The fbank features are then fed to a seq2seq model with the following configuration: The Bi-RNN BIBREF25 models mentioned above uses a LSTM BIBREF26 cell followed by a projection layer (BLSTMP). In our experiments below, we use only a character-level seq2seq model trained by CTC and attention decoder. Thus in the following experiments we intend to use character error rate (% CER) as a suitable measure to analyze the model performance. However, in section SECREF26 we integrate a character-level RNNLM BIBREF27 with seq2seq model externally and showcase the performance in terms of word error rate (% WER). In this case the words are obtained by concatenating the characters and the space together for scoring with reference words. All experiments are implemented in ESPnet, end-to-end speech processing toolkit BIBREF28 . Multilingual experiments Multilingual approaches used in hybrid RNN/DNN-HMM systems BIBREF10 have been used for for tackling the problem of low-resource data condition. Some of these approaches include language adaptive training and shared layer retraining BIBREF29 . Among them, the most benefited method is the parameter sharing technique BIBREF10 . To incorporate the former approach into encoder, CTC and attention decoder model, we performed the following experiments: Stage 0 - Naive approach In this approach, the model is first trained with 10 multiple languages as denoted in table TABREF14 approximating to 600 hours of training data. data from all languages available during training is used to build a single seq2seq model. The model is trained with a character label set composed of characters from all languages including both train and target set as mentioned in table TABREF14 . The model provides better generalization across languages. Languages with limited data when trained with other languages allows them to be robust and helps in improving the recognition performance. In spite of being simple, the model has limitations in keeping the target language data unseen during training. 
Table TABREF16 shows the recognition performance of naive multilingual approach using BLSTMP and VGG model against a monolingual model trained with BLSTMP. The results clearly indicate that having a better architecture such as VGG-BLSTM helps in improving multilingual performance. Except Pashto, Georgian and Tokpisin, the multilingual VGG-BLSTM model gave 8.8 % absolute gain in average over monolingual model. In case of multilingual BLSTMP, except Pashto and Georgian an absolute gain of 5.0 % in average is observed over monolingual model. Even though the VGG-BLSTM gave improvements, we were not able to perform stage-1 and stage-2 retraining with it due to time constraints. Thus, we proceed further with multilingual BLSTMP model for retraining experiments tabulated below. Stage 1 - Retraining decoder only To alleviate the limitation in the previous approach, the final layer of the seq2seq model which is mainly responsible for classification is retrained to the target language. In previous works BIBREF10 , BIBREF29 related to hybrid DNN/RNN models and CTC based models BIBREF11 , BIBREF14 the softmax layer is only adapted. However in our case, the attention decoder and CTC decoder both have to be retrained to the target language. This means the CTC and attention layers are only updated for gradients during this stage. We found using SGD optimizer with initial learning rate of INLINEFORM0 works better for retraining compared to AdaDelta. The learning rate is decayed in this training at a factor of INLINEFORM0 if there is a drop in validation accuracy. Table TABREF20 shows the performance of simply retraining the last layer using a single target language Assamese. Stage 2 - Finetuning both encoder and decoder Based on the observations from stage-1 model in section SECREF22 , we found that simply retraining the decoder towards a target language resulted in degrading %CER the performance from 45.6 to 61.3. This is mainly due to the difference in distribution across encoder and decoder. So, to alleviate this difference the encoder and decoder is once again retrained or fine-tuned using the model from stage-1. The optimizer used here is SGD as in stage-1, but the initial learning rate is kept to INLINEFORM0 and decayed based on validation performance. The resulting model gave an absolute gain of 1.6% when finetuned a multilingual model after 4th epoch. Also, finetuning a model after 15th epoch gave an absolute gain of 4.3%. To further investigate the performance of this approach across different target data sizes, we split the train set into INLINEFORM0 5 hours, INLINEFORM1 10 hours, INLINEFORM2 20 hours and INLINEFORM3 full set. Since, in this approach the model is only finetuned by initializing from stage-1 model, the model architecture is fixed for all data sizes. Figure FIGREF23 shows the effectiveness of finetuning both encoder and decoder. The gains from 5 to 10 hours was more compared to 20 hours to full set. Table TABREF25 tabulates the % CER obtained by retraining the stage-1 model with INLINEFORM0 full set of target language data. An absolute gain is observed using stage-2 retraining across all languages compared to monolingual model. Multilingual RNNLM In an ASR system, a language model (LM) takes an important role by incorporating external knowledge into the system. Conventional ASR systems combine an LM with an acoustic model by FST giving a huge performance gain. This trend is also shown in general including hybrid ASR systems and neural network-based sequence-to-sequence ASR systems. 
The following experiments show a benefit of using a language model in decoding with the previous stage-2 transferred models. Although the performance gains in %CER are also generally observed over all target languages, the improvement in %WER was more distinctive. The results shown in the following Fig. FIGREF27 are in %WER. “whole” in each figure means we used all the available data for the target language as full set explained before. We used a character-level RNNLM, which was trained with 2-layer LSTM on character sequences. We use all available paired text in the corresponding target language to train the LM for the language. No external text data were used. All language models are trained separately from the seq2seq models. When building dictionary, we combined all the characters over all 15 languages mentioned in table TABREF14 to make them work with transferred models. Regardless of the amount of data used for transfer learning, the RNNLM provides consistent gains across all languages over different data sizes. As explained already, language models were trained separately and used to decode jointly with seq2seq models. The intuition behind it is to use the separately trained language model as a complementary component that works with a implicit language model within a seq2seq decoder. The way of RNNLM assisting decoding follows the equation below: DISPLAYFORM0 INLINEFORM0 is a scaling factor that combines the scores from a joint decoding eq.( EQREF13 ) with RNN-LM, denoted as INLINEFORM1 . This approach is called shallow fusion. Our experiments for target languages show that the gains from adding RNNLM are consistent regardless of the amount of data used for transfer learning. In other words, in Figure FIGREF27 , the gap between two lines are almost consistent over all languages. Also, we observe the gain we get by adding RNN-LM in decoding is large. For example, in the case of assamese, the gain by RNN-LM in decoding with a model retrained on 5 hours of the target language data is almost comparable with the model stage-2 retrained with 20 hours of target language data. On average, absolute gain INLINEFORM0 6% is obtained across all target languages as noted in table TABREF28 . Conclusion In this work, we have shown the importance of transfer learning approach such as stage-2 multilingual retraining in a seq2seq model setting. Also, careful selection of train and target languages from BABEL provide a wide variety in recognition performance (%CER) and helps in understanding the efficacy of seq2seq model. The experiments using character-based RNNLM showed the importance of language model in boosting recognition performance (%WER) over all different hours of target data available for transfer learning. Table TABREF25 and TABREF28 summarizes, the effect of these techniques in terms of %CER and %WER. These methods also show their flexibility in incorporating it in attention and CTC based seq2seq model without compromising loss in performance. Future work We could use better architectures such as VGG-BLSTM as a multilingual prior model before transferring them to a new target language by performing stage-2 retraining. The naive multilingual approach can be improved by including language vectors as input or target during training to reduce the confusions. Also, investigation of multilingual bottleneck features BIBREF30 for seq2seq model can provide better performance. 
Apart from using the character-level language model as in this work, a word-level RNNLM can be connected during decoding to further improve %WER. The attention-based decoder can also be aided by an RNNLM through the cold fusion approach during training to attain a better-trained model. In the near future, we will incorporate all the above techniques to obtain performance comparable with state-of-the-art hybrid DNN/RNN-HMM systems.
VGG-BLSTM, character-level RNNLM
cf15c4652e23829d8fb4cf2a25e64408c18734c1
cf15c4652e23829d8fb4cf2a25e64408c18734c1_0
Q: Why is this work different from text-only UNMT? Text: Introduction Our long-term goal is to build intelligent systems that can perceive their visual environment and understand the linguistic information, and further make an accurate translation inference to another language. Since image has become an important source for humans to learn and acquire knowledge (e.g. video lectures, BIBREF1 , BIBREF2 , BIBREF3 ), the visual signal might be able to disambiguate certain semantics. One way to make image content easier and faster to be understood by humans is to combine it with narrative description that can be self-explainable. This is particularly important for many natural language processing (NLP) tasks as well, such as image caption BIBREF4 and some task-specific translation–sign language translation BIBREF5 . However, BIBREF6 demonstrates that most multi-modal translation algorithms are not significantly better than an off-the-shelf text-only machine translation (MT) model for the Multi30K dataset BIBREF7 . There remains an open question about how translation models should take advantage of visual context, because from the perspective of information theory, the mutual information of two random variables $I(X,Y)$ will always be no greater than $I(X;Y,Z)$ , due to the following fact. $$& I(X;Y,Z) - I(X;Y) \nonumber \\ =& KL(p(X,Y,Z)\Vert p(X|Y)p(Z|Y)p(Y)) $$ (Eq. 1) where the Kullback-Leibler (KL) divergence is non-negative. This conclusion makes us believe that the visual content will hopefully help the translation systems. Since the standard paradigm of multi-modal translation always considers the problem as a supervised learning task, the parallel corpus is usually sufficient to train a good translation model, and the gain from the extra image input is very limited. Moreover, the scarcity of the well formed dataset including both images and the corresponding multilingual text descriptions is also another constraint to prevent the development of more scaled models. In order to address this issue, we propose to formulate the multi-modal translation problem as an unsupervised learning task, which is closer to real applications. This is particularly important given the massive amounts of paired image and text data being produced everyday (e.g., news title and its illustrating picture). Our idea is originally inspired by the text-only unsupervised MT (UMT) BIBREF8 , BIBREF9 , BIBREF0 , investigating whether it is possible to train a general MT system without any form of supervision. As BIBREF0 discussed, the text-only UMT is fundamentally an ill-posed problem, since there are potentially many ways to associate target with source sentences. Intuitively, since the visual content and language are closely related, the image can play the role of a pivot “language" to bridge the two languages without paralleled corpus, making the problem “more well-defined" by reducing the problem to supervised learning. However, unlike the text translation involving word generation (usually a discrete distribution), the task to generate a dense image from a sentence description itself is a challenging problem BIBREF10 . High quality image generation usually depends on a complicated or large scale neural network architecture BIBREF11 , BIBREF12 , BIBREF13 . Thus, it is not recommended to utilize the image dataset as a pivot “language" BIBREF14 . 
Motivated by the cycle-consistency BIBREF15 , we tackle the unsupervised translation with a multi-modal framework which includes two sequence-to-sequence encoder-decoder models and one shared image feature extractor. We don't introduce the adversarial learning via a discriminator because of the non-differentiable $\arg \max $ operation during word generation. With five modules in our framework, there are multiple data streaming paths in the computation graph, inducing the auto-encoding loss and cycle-consistency loss, in order to achieve the unsupervised translation. Another challenge of unsupervised multi-modal translation, and more broadly for general multi-modal translation tasks, is the need to develop a reasonable multi-source encoder-decoder model that is capable of handling multi-modal documents. Moreover, during training and inference stages, it is better to process the mixed data format including both uni-modal and multi-modal corpora. First, this challenge highly depends on the attention mechanism across different domains. Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN) are naturally suitable to encode the language text and visual image respectively; however, encoded features of RNN has autoregressive property which is different from the local dependency of CNN. The multi-head self-attention transformer BIBREF16 can mimic the convolution operation, and allow each head to use different linear transformations, where in turn different heads can learn different relationships. Unlike RNN, it reduces the length of the paths of states from the higher layer to all states in the lower layer to one, and thus facilitates more effective learning. For example, the BERT model BIBREF17 , that is completely built upon self-attention, has achieved remarkable performance in 11 natural language tasks. Therefore, we employ transformer in both the text encoder and decoder of our model, and design a novel joint attention mechanism to simulate the relationships among the three domains. Besides, the mixed data format requires the desired attention to support the flexible data stream. In other words, the batch fetched at each iteration can be either uni-modal text data or multi-modal text-image paired data, allowing the model to be adaptive to various data during inference as well. Succinctly, our contributions are three-fold: (1) We formuate the multi-modal MT problem as unsupervised setting that fits the real scenario better and propose an end-to-end transformer based multi-modal model. (2) We present two technical contributions: successfully train the proposed model with auto-encoding and cycle-consistency losses, and design a controllable attention module to deal with both uni-modal and multi-modal data. (3) We apply our approach to the Multilingual Multi30K dataset in English $\leftrightarrow $ French and English $\leftrightarrow $ German translation tasks, and the translation output and the attention visualization show the gain from the extra image is significant in the unsupervised setting. Related Work We place our work in context by arranging several prior popular topics, along the the axes of UMT, image caption and multi-modal MT. Unsupervised Machine Translation Existing methods in this area BIBREF18 , BIBREF9 , BIBREF0 are mainly modifications of encoder-decoder schema. Their key ideas are to build a common latent space between the two languages (or domains) and to learn to translate by reconstructing in both domains. 
The difficulty in multi-modal translation is the involvement of another visual domain, which is quite different from the language domain. The interaction between image and text are usually not symmetric as two text domains. This is the reason why we take care of the attention module cautiously. Image Caption Most standard image caption models are built on CNN-RNN based encoder-decoder framework BIBREF19 , BIBREF4 , where the visual features are extracted from CNN and then fed into RNN to output word sequences as captions. Since our corpora contain image-text paired data, our method also draws inspiration from image caption modeling. Thus, we also embed the image-caption model within our computational graph, whereas the transformer architecture is adopted as a substitution for RNN. Multi-modal Machine Translation This problem is first proposed by BIBREF6 on the WMT16 shared task at the intersection of natural language processing and computer vision. It can be considered as building a multi-source encoder on top of either MT or image caption model, depending on the definition of extra source. Most Multi-modal MT research still focuses on the supervised setting, while BIBREF14 , BIBREF20 , to our best knowledge, are the two pioneering works that consider generalizing the Multi-modal MT to an unsupervised setting. However, their setup puts restrictions on the input data format. For example, BIBREF14 requires the training data to be image text pair but the inference data is text-only input, and BIBREF20 requires image text pair format for both training and testing. These limit the model scale and generalization ability, since large amount of monolingual corpora is more available and less expensive. Thus, in our model, we specifically address this issue with controllable attention and alternative training scheme. Methodology In this section we first briefly describe the main MT systems that our method is built upon and then elaborate on our approach. Neural Machine Translation If a bilingual corpus is available, given a source sentence $\mathbf {x}=(x_1,...,x_n)$ of $n$ tokens, and a translated target sentence $\mathbf {y}=(y_1,...,y_m)$ of $m$ tokens, where $(\mathbf {x}, \mathbf {y})\in \mathcal {X}\times \mathcal {Y}$ , the NMT model aims at maximizing the likelihood, $$p(\mathbf {y}|\mathbf {x}) = \sum _{t=1}^m p(y_t|\mathbf {y}_{<t}, \mathbf {x}) .$$ (Eq. 4) The attention based sequence-to-sequence encoder-decoder architecture BIBREF21 , BIBREF22 , BIBREF23 , BIBREF16 is usually employed to parameterize the above conditional probability. The encoder reads the source sentence and outputs the hidden representation vectors for each token, $\lbrace \mathbf {h}_1^e,...,\mathbf {h}_n^e\rbrace = \text{Enc}_x(\mathbf {x})$ . The attention based decoder is defined in a recurrent way. Given the decoder has the summarized representation vector $\mathbf {h}_{t}^d=\text{Dec}_y(\mathbf {y}_{<t}, \mathbf {x})$ at time stamp $t$ , the model produces a context vector $\mathbf {c}_t=\sum _{j=1}^n \alpha _i\mathbf {h}_j^e$ based on an alignment model, $\lbrace \alpha _1,...,\alpha _n\rbrace = \text{Align}(\mathbf {h}_{t}^d, \lbrace \mathbf {h}_1^e,...,\mathbf {h}_n^e\rbrace )$ , such that $\sum _{j=1}^n\alpha _j=1$ . Therefore, the conditional probability to predict the next token can be written as, $$p(y_t|\mathbf {y}_{<t}, \mathbf {x}) = \text{softmax}(g(\mathbf {c}_t, y_{t-1}, \mathbf {h}_{t-1}^d)) .$$ (Eq. 5) in which $g(\cdot )$ denotes a non-linear function extracting features to predict the target. 
The encoder and decoder model described here is in a general formulation, not constrained to be RNN BIBREF21 or transformer architecture BIBREF16 . Multi-modal Neural Machine Translation In this task, an image $\mathbf {z}$ and the description of the image in two different languages form a triplet $(\mathbf {x}, \mathbf {y}, \mathbf {z})\in \mathcal {X}\times \mathcal {Y}\times \mathcal {I}$ . Thus, the problem naturally becomes maximizing the new likelihood $p(\mathbf {y}|\mathbf {x},\mathbf {z})$ . Though the overall framework of such a translation task is still the encoder-decoder architecture, the detailed feature extractor and attention module can vary greatly, due to the extra source image. The traditional approach BIBREF6 , BIBREF24 is to encode the source text and the image separately and combine them at the high level features, where the image feature map can be represented as $\lbrace \mathbf {h}_{1}^i,...,\mathbf {h}_k^i\rbrace =\text{Enc}_z(\mathbf {z})$ and $\text{Enc}_z$ is usually a truncated image classification model, such as Resnet BIBREF25 . Notice that unlike the number of the text features is exactly the number of tokens in the source, the number of the image features $k$ depends on the last layer in the truncated network. Then, the context vector is computed via an attention model, $$\mathbf {c}_t=\text{Attention}(\mathbf {h}_{t}^d, \lbrace \mathbf {h}_1^e,...,\mathbf {h}_n^e\rbrace , \lbrace \mathbf {h}_{1}^i,...,\mathbf {h}_k^i\rbrace )$$ (Eq. 8) Since three sets of features appear in Eq ( 8 ), there are more options of the attention mechanism than text-only NMT. The decoder can remain the same in the recurrent fashion. Unsupervised Learning The unsupervised problem requires a new problem definition. On both the source and the target sides, only monolingual documents are presented in the training data, i.e., the data comes in the paired form of $(\mathbf {x}, \mathbf {z})\in \mathcal {X}\times \mathcal {I}$ and $(\mathbf {y}, \mathbf {z})\in \mathcal {Y}\times \mathcal {I}$ . The triplet data format is no longer available. The purpose is to learn a multi-modal translation model $\mathcal {X}\times \mathcal {I} \rightarrow \mathcal {Y}$ or a text-only one $\mathcal {X} \rightarrow \mathcal {Y}$ . Note there is no explicit paired information cross two languages, making it impossible to straightforwardly optimize the supervised likelihood. Fortunately, motivated by the CycleGAN BIBREF15 and the dual learning in BIBREF26 , we can actually learn the translation model for both directions between the source and the target in an unsupervised way. Additionally, we can even make the multi-modal and uni-modal inference compatible with deliberate fine-tuning strategy. Auto-Encoding Loss As Figure 2 illustrates, there are five main modules in the overall architecture, two encoders and two decoders for the source and target languages, and one extra image encoder. Since the lack of triplet data, we can only build the first two following denoised auto-encoding losses without involving the paired $\mathbf {x}$ and $\mathbf {y}$ , $$\mathcal {L}_{\text{auto}}(\mathbf {x}, \mathbf {z}) = SCE(\text{Dec}_{x}(\text{Enc}_{x}(\mathbf {x}), \text{Enc}_{z}(\mathbf {z})), \mathbf {x}) \\ \mathcal {L}_{\text{auto}}(\mathbf {y}, \mathbf {z}) = SCE(\text{Dec}_{y}(\text{Enc}_{y}(\mathbf {y}), \text{Enc}_{z}(\mathbf {z})), \mathbf {x}) $$ (Eq. 11) where $SCE(\cdot ,\cdot )$ represents sequential cross-entropy loss. 
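A minimal sketch of the denoised auto-encoding loss of Eq. (11) is given below, written against hypothetical encoder/decoder callables: the source sentence is corrupted before encoding, the image features are supplied alongside it, and the decoder is trained to reconstruct the clean sentence.

```python
import torch.nn.functional as F

def auto_encoding_loss(enc_txt, enc_img, dec, tokens, image, noise_fn, pad_id=0):
    """Denoised auto-encoding loss of Eq. (11), sketched with assumed interfaces.

    enc_txt / enc_img / dec stand in for the text encoder, image encoder and
    decoder; `noise_fn` corrupts the source sentence before encoding.
    """
    noisy = noise_fn(tokens)                     # word deletion / local permutation
    h_txt = enc_txt(noisy)                       # (batch, src_len, d)
    h_img = enc_img(image)                       # (batch, 196, d) flattened ResNet map
    logits = dec(tokens[:, :-1], h_txt, h_img)   # teacher forcing on the clean target
    return F.cross_entropy(logits.transpose(1, 2), tokens[:, 1:], ignore_index=pad_id)
```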
We use “denoised" loss here, because the exact auto-encoding structure will likely force the language model learning a word-to-word copy network. The image is seemingly redundant since the text input contains the entire information for recovery. However, it is not guaranteed that our encoder is lossless, so the image is provided as an additional supplement to reduce the information loss. Cycle-Consistency Loss The auto-encoding loss can, in theory, learn two functional mappings $\mathcal {X}\times \mathcal {I}\rightarrow \mathcal {X}$ and $\mathcal {Y}\times \mathcal {I}\rightarrow \mathcal {Y}$ via the supplied training dataset. However, the two mappings are essentially not our desiderata, even though we can switch the two decoders to build our expected mappings, e.g., $\mathcal {X}\times \mathcal {I}\rightarrow \mathcal {Y}$ . The crucial problem is that the transferred mappings achieved after switching decoders lack supervised training, since no regularization pushes the latent encoding spaces aligned between the source and target. We argue that this issue can be tackled by another two cycle-consistency properties (note that we use the square brackets $[]$ below to denote the inference mode, meaning no gradient back-propagation through such operations), $$\text{Dec}_{x}(\text{Enc}_{y}(\text{Dec}_{y}[\text{Enc}_{x}(\mathbf {x}), \text{Enc}_{z}(\mathbf {z})]), \text{Enc}_{z}(\mathbf {z})) \approx \mathbf {x} \\ \text{Dec}_{y}(\text{Enc}_{x}(\text{Dec}_{x}[\text{Enc}_{y}(\mathbf {y}), \text{Enc}_{z}(\mathbf {z})]), \text{Enc}_{z}(\mathbf {z})) \approx \mathbf {y} $$ (Eq. 13) The above two properties seem complicated, but we will decompose them step-by-step to see its intuition, which are also the key to make the auto-encoders translation models across different languages. Without loss of generality, we use Property ( 13 ) as our illustration, where the same idea is applied to (). After encoding the information from source and image as the high level features, the encoded features are fed into the decoder of another language (i.e. target language), thus obtaining an inferred target sentence, $$\tilde{\mathbf {y}} = F_{xz\rightarrow y}(\mathbf {x},\mathbf {z}) \triangleq \text{Dec}_{y}[\text{Enc}_{x}(\mathbf {x}), \text{Enc}_{z}(\mathbf {z})] .$$ (Eq. 14) Unfortunately, the ground truth $\mathbf {y}$ corresponding to the input $\mathbf {x}$ or $\mathbf {z}$ is unknown, so we cannot train $F_{xz\rightarrow y}$ at this time. However, since $\mathbf {x}$ is the golden reference, we can construct the pseudo supervised triplet $(\mathbf {x}, \tilde{\mathbf {y}}, \mathbf {z})$ as the augmented data to train the following model, $$F_{yz\rightarrow x}(\tilde{\mathbf {y}},\mathbf {z}) \triangleq \text{Dec}_{x}(\text{Enc}_{y}(\tilde{\mathbf {y}}), \text{Enc}_{z}(\mathbf {z})).$$ (Eq. 15) Note that the pseudo input $\tilde{\mathbf {y}}$ can be considered as the corrupted version of the unknown $\mathbf {y}$ . The noisy training step makes sense because injecting noise to the input data is a common trick to improve the robustness of model even for traditional supervised learning BIBREF27 , BIBREF28 . Therefore, we incentivize this behavior using the cycle-consistency loss, $$\mathcal {L}_{\text{cyc}}(\mathbf {x},\mathbf {z}) = SCE(F_{yz\rightarrow x}(F_{xz\rightarrow y}(\mathbf {x},\mathbf {z}),\mathbf {z}), \mathbf {x}) .$$ (Eq. 16) This loss indicates the cycle-consistency $(\ref {eq:cycle1})$ , and the mapping $\mathcal {Y}\times \mathcal {I}\rightarrow \mathcal {X}$ can be successfully refined. 
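The cycle-consistency loss can be implemented as a two-pass step: an inference-only forward translation followed by a differentiable back-translation. The sketch below illustrates one direction of Eq. (16) with illustrative module and method names, not the actual implementation.

```python
import torch

def cycle_loss_xz_to_y_to_x(model_xz2y, model_yz2x, seq_ce, x, z):
    """One direction of the cycle-consistency loss (Eq. (16)).

    The forward translation runs in inference mode (the square brackets in the
    text), producing a pseudo target; the reverse model is then trained to
    reconstruct the gold source from it.
    """
    with torch.no_grad():                        # no gradient through the forward pass
        y_tilde = model_xz2y.translate(x, z)     # pseudo target sentence(s)
    x_hat_logits = model_yz2x(y_tilde, z)        # differentiable back-translation
    return seq_ce(x_hat_logits, x)               # sequential cross-entropy vs. gold x
```

Stopping the gradient on the forward pass is consistent with the non-differentiable argmax during word generation noted earlier.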
Controllable Attention In additional to the loss function, another important interaction between the text and image domain should focus on the decoder attention module. In general, we proposal to extend the traditional encoder-decoder attention to a multi-domain attention. $$\mathbf {c}_t = \text{Att}(\mathbf {h}_t^d, \mathbf {h}^e) + \lambda _1\text{Att}(\mathbf {h}_t^d, \mathbf {h}^i) + \lambda _2 \text{Att}(\mathbf {h}_t^d, \mathbf {h}^e, \mathbf {h}^i)$$ (Eq. 18) where $\lambda _1$ and $\lambda _2$ can be either 1 or 0 during training, depending on whether the fetched batch includes image data or not. For example, we can easily set up a flexible training scheme by alternatively feeding the monolingual language data and text-image multimodal data to the model. A nice byproduct of this setup allows us to successfully make a versatile inference with or without image, being more applicable to real scenarios. In practice, we utilize the recent developed self-attention mechanism BIBREF16 as our basic block, the hidden states contain three sets of vectors $Q, K, V$ , representing queries, keys and values. Therefore, our proposed context vector can be rewritten as, $$\mathbf {c}_t &= \text{softmax}\left(\frac{Q_t^d (K^e)^\top }{\sqrt{d}}\right)V^e + \lambda _1 \text{softmax}\left(\frac{Q_t^d (K^i)^\top }{\sqrt{d}}\right)V^i \nonumber \\ &+ \lambda _2 \text{softmax}\left(\frac{Q_t^d (K^{ei})^\top }{\sqrt{d}}\right)V^{ei} \nonumber \\ &+ \lambda _2 \text{softmax}\left(\frac{Q_t^d (K^{ie})^\top }{\sqrt{d}}\right)V^{ie}$$ (Eq. 19) where $d$ is the dimensionality of keys, and $[K^{ei}, V^{ei}]=\text{FFN}\left(\text{softmax}\left(\frac{Q^e(K^i)^\top }{\sqrt{d}}\right)V^i \right)$ means the attention from text input to image input, and $[K^{ie}, V^{ie}]$ represents the symmetric attention in the reverse direction. Note the notation $Q^e$ has no subscript and denotes as a matrix, indicating the softmax is row-wise operation. In practice, especially for Multi30K dataset, we found $\lambda _2$ is less important and $\lambda _2=0$ brings no harm to the performance. Thus, we always set it as 0 in our experiments, but non-zero $\lambda _2$ may be helpful in other cases. Training and Testing on Multi30K We evaluate our model on Multi30K BIBREF7 2016 test set of English $\leftrightarrow $ French (En $\leftrightarrow $ Fr) and English $\leftrightarrow $ German (En $\leftrightarrow $ De) language pairs. This dataset is a multilingual image caption dataset with 29000 training samples of images and their annotations in English, German, French BIBREF29 and Czech BIBREF30 . The validation set and test set have 1014 and 1000 samples respectively. To ensure the model never sees any paired sentences information (which is an unlikely scenario in practice), we randomly split half of the training and validation sets for one language and use the complementary half for the other. The resulting corpora is denoted as M30k-half with 14500 and 507 training and validation samples respectively. To find whether the image as additional information used in the training and/or testing stage can bring consistent performance improvement, we train our model in two different ways, each one has train with text only (-txt) and train with text+image (-txt-img) modes. 
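A minimal sketch of the controllable attention context vector of Eq. (19) follows, with the cross-domain terms dropped (λ2 = 0, as in our Multi30K experiments); tensor shapes assume the flattened 14 × 14 ResNet feature map, and setting the image weight to zero recovers text-only decoding for uni-modal batches.

```python
import torch.nn.functional as F

def controllable_context(q_dec, k_txt, v_txt, k_img, v_img, lam1=1.0):
    """Controllable attention context of Eq. (19) with the lambda_2 terms dropped.

    q_dec: (batch, tgt_len, d); k_txt / v_txt: (batch, src_len, d);
    k_img / v_img: (batch, 196, d). Set lam1 = 0 for text-only batches.
    """
    d = q_dec.size(-1)
    ctx = F.softmax(q_dec @ k_txt.transpose(1, 2) / d ** 0.5, dim=-1) @ v_txt
    if lam1:
        ctx = ctx + lam1 * (
            F.softmax(q_dec @ k_img.transpose(1, 2) / d ** 0.5, dim=-1) @ v_img)
    return ctx
```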
We would compare the best performing training method to the state-of-the-art, and then do side-by-side comparison between them: Pre-large (P): To leverage the controllable attention mechanism for exploring the linguistic information in the large monolingual corpora, we create text only pre-training set by combining the first 10 million sentences of the WMT News Crawl datasets from 2007 to 2017 with 10 times M30k-half. This ends up in a large text only dataset of 10145000 unparalleled sentences in each language. P-txt: We would then pre-train our model without the image encoder on this dataset and use the M30k-half validation set for validation. P-txt-img: Once the text-only model is pre-trained, we then use it for the following fine-tuning stage on M30k-half. Except for the image encoder, we initialize our model with the pre-trained model parameters. The image encoder uses pre-trained ResNet-152 BIBREF25 . The error gradient does not back-propagate to the original ResNet network. Scratch (S): We are also curious about the role of image can play when no pre-training is involved. We train from scratch using text only (S-txt) and text with corresponding image (S-txt-img) on M30k-half. Implementation Details and Baseline Models The text encoder and decoder are both 4 layers transformers with dimensionality 512, and for the related language pair, we share the first 3 layers of transformer for both encoder and decoder. The image encoder is the truncated ResNet-152 with output layer res4b35_relu, and the parameters of ResNet are freezing during model optimization. Particularly, the feature map $14\times 14\times 1024$ of layer res4b35_relu is flattened to $196\times 1024$ so that its dimension is consistent with the sequential text encoder output. The actual losses ( 11 ) and () favor a standard denoising auto-encoders: the text input is perturbed with deletion and local permutation; the image input is corrupted via dropout. We use the same word preprocessing techniques (Moses tokenization, BPE, binarization, fasttext word embedding on training corpora, etc.) as reported in BIBREF0 , please refer to the relevant readings for further details. We would like to compare the proposed UMNMT model to the following UMT models. MUSE BIBREF8 : It is an unsupervised word-to-word translation model. The embedding matrix is trained on large scale wiki corpora. Game-NMT BIBREF14 : It is a multi-modal zero-source UMT method trained using reinforcement learning. UNMT-text BIBREF9 : It is a mono-modal UMT model which only utilize text data and it is pretrained on synthetic paired data generated by MUSE. Benchmarking with state-of-the-art In this section, we report the widely used BLEU score of test dataset in Table 1 for different MT models. Our best model has achieved the state-of-the-art performance by leading more than 6 points in En $\rightarrow $ Fr task to the second best. Some translation examples are shown in Figure 4 . There is also close to 1 point improvement in the En $\rightarrow $ De task. Although pre-training plays a significant role to the final performance, the image also contributes more than 3 points in case of training from scratch (S-txt vs. S-txt-img), and around 2 points in case of fine tuning (P-txt vs. P-txt-img). Interestingly, it is observed that the image contributes less performance improvement for pre-training than training from scratch. This suggests that there is certain information overlap between the large monolingual corpus and the M30k-half images. 
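For the truncated ResNet-152 image encoder described in the implementation details above, a rough torchvision-based sketch looks as follows; treating the end of layer3 as the counterpart of the Caffe-style res4b35_relu layer is an assumption, but it yields the same 14x14x1024 feature map that is flattened to 196x1024.

import torch
import torchvision.models as models

# Truncated ResNet-152: keep everything up to layer3, whose 1024-channel
# 14x14 output stands in for the res4b35_relu feature map.
resnet = models.resnet152(pretrained=True)
trunk = torch.nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
                            resnet.layer1, resnet.layer2, resnet.layer3)
for p in trunk.parameters():
    p.requires_grad = False                   # the ResNet parameters stay frozen

images = torch.randn(8, 3, 224, 224)          # dummy batch of images
feat = trunk(images)                          # (8, 1024, 14, 14)
feat = feat.flatten(2).transpose(1, 2)        # (8, 196, 1024), aligned with text encoder outputs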
We also compare the Meteor, Rouge, CIDEr score in Table 2 and validation BLEU in Figure 3 to show the consistent improvement brought by using images. Analysis In this section, we would shed more light on how and why images can help for unsupervised MT. We would first visualize which part of the input image helps the translation by showing the heat map of the transformer attention. We then show that image not only helps the translation by providing more information in the testing stage, it can also act as a training regularizer by guiding the model to converge to a better local optimal point in the training stage. To visualize the transformer's attention from regions in the input image to each word in the translated sentences, we use the scaled dot-production attention of the transformer decoder's multi-head attention block as shown in Figure 2 , more specifically, it is the $\text{softmax}\big ( \frac{Q^d_t (K^i)^\text{T}}{\sqrt{d}}\big )$ . This is a matrix of shape $l_T\times l_S$ , where $l_T$ is the translated sentence length and $l_S$ is the source length. Since we flatten the 14 $\times $ 14 matrix from the ResNet152, the $l_S = 196$ . A heat map for the $j$ th word in the translation is then generated by mapping the value of $k$ th entry in $\lbrace c_i[j,k]\rbrace ^{196}_{k=1}$ to their receptive field in the original image, averaging the value in the overlapping area and then low pass filtering. Given this heat map, we would visualize it in two ways: (1) We overlay the contour of the heat-map with the original image as shown in the second, and fifth rows of Figure 5 and the second row of Figure 6 ; (2) We normalize the heat map between 0 and 1, and then multiply it with each color channel of the input image pixel-wise as shown in the third and sixth rows of Figure 5 and in the third row of Figure 6 . We visualize the text attention by simply plotting the text attention matrix $\text{softmax}\big ( \frac{Q^d_t (K^e)^\text{T}}{\sqrt{d}}\big )$ in each transformer decoder layer as shown in “Text decoder attention by layers" in these two figures. Figure 5 shows two positive examples that when transformer attends to the right regions of the image like “orange", “chapeau", or “humme" (interestingly, the nose) in the upper image or “bleu", “t-shirt", “blanc" or “short" in the lower image. Whereas in Figure 6 , transformer attends to the whole image and treat it as a pond instead of focusing on the region where a twig exists. As a result, the twig was mistook as pond. For the text attention, we can see the text heat map becomes more and more diagonal as the decoder layer goes deeper in both figures. This indicates the text attention gets more and more focused since the English and French have similar grammatical rules. As shown in Equation 1 , the model would certainly get more information when image is present in the inferencing stage, but can images be helpful if they are used in the training stage but not readily available during inferencing (which is a very likely scenario in practice)? Table 3 shows that even when images are not used, the performance degradation are not that significant (refer to Row 6-8 in Table 1 for comparison) and the trained with image model still outperforms the trained with text only model by quite a margin. This suggests that images can serve as additional information in the training process, thus guiding the model to converge to a better local optimal point. Such findings also verify the proposed controllable attention mechanism. 
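The heat-map construction described above can be approximated in a few lines of NumPy/SciPy; here bilinear upsampling of the 14x14 attention grid stands in for the exact receptive-field mapping and overlap averaging, so this is a simplified sketch rather than the authors' precise procedure.

import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def word_heatmap(attn_row, image, sigma=8):
    """attn_row: the 196 image-attention weights for one translated word;
    image: HxWx3 array in [0, 1]. Returns the heat map and the masked image."""
    grid = attn_row.reshape(14, 14)                                  # undo the flattening of the feature map
    heat = zoom(grid, (image.shape[0] / 14, image.shape[1] / 14), order=1)
    heat = gaussian_filter(heat, sigma)                              # simple low-pass filtering
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)    # normalize to [0, 1]
    return heat, image * heat[..., None]                             # per-channel weighting for visualization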
These findings indicate that the requirement of paired image and monolingual text in the testing stage can be relaxed: text-only input can be fed to the model when no paired image is available. To show that images help MT by aligning sentences with similar meanings across languages, we also train the UMNMT model on the whole Multi30K dataset, where the source and target sentences are treated as unparalleled (i.e., we still feed the image-text pairs to the model). By doing this, we greatly increase the number of sentences in different languages with similar meanings; if images can help align those sentences, then the model should learn better than the model trained with text only. We can see from Table 4 that the performance gain from using images far outstrips that of the model trained on text-only data; in the case of En $\rightarrow $ Fr, P-txt-img gains more than 4 points over P-txt. Conclusion In this work, we proposed a new unsupervised NMT model with multi-modal attention (one for text and one for image) which is trained under an auto-encoding and cycle-consistency paradigm. Our experiments showed that images as additional information can significantly and consistently improve UMT performance. This justifies our hypothesis that utilizing multi-modal data can increase the mutual information between the source sentences and the translated target sentences. We have also shown that the UMNMT model trained with images converges to a better local optimum and still achieves better performance than the text-only model even if images are not available in the testing stage. Overall, our work makes unsupervised machine translation more applicable to real-world scenarios.
the image can play the role of a pivot "language" to bridge the two languages without a parallel corpus
439af1232a012fc4d94ef2ffe305dd405bee3888
439af1232a012fc4d94ef2ffe305dd405bee3888_0
Q: What is baseline used? Text: Introduction Most languages, even with millions of speakers, have not been the center for natural language processing and are counted as low-resource for tasks like named entity recognition (NER). Similarly, even for high-resource languages, there exists only few labeled data for most entity types beyond person, location and organization. Distantly- or weakly-supervised approaches have been proposed to solve this issue, e.g., by using lists of entities for labeling raw text BIBREF0, BIBREF1. This allows obtaining large amounts of training data quickly and cheaply. Unfortunately, these labels often contain errors and learning with this noisily-labeled data is difficult and can even reduce overall performance (see, e.g. BIBREF2). A variety of ideas have been proposed to overcome the issues of noisy training data. One popular approach is to estimate the relation between noisy and clean, gold-standard labels and use this noise model to improve the training procedure. However, most of these approaches only assume a dependency between the labels and do not take the features into account when modeling the label noise. This may disregard important information. The global confusion matrix BIBREF3 is a simple model which assumes that the errors in the noisy labels just depend on the clean labels. Our contributions are as follows: We propose to cluster the input words with the help of additional, unlabeled data. Based on this partition of the feature space, we obtain different confusion matrices that describe the relationship between clean and noisy labels. We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise. The advanced modeling of the noisy labels substantially improves the performance up to 36% over methods without noise-handling and up to 9% over all other noise-handling baselines. Related Work A popular approach is modeling the relationship between noisy and clean labels, i.e., estimating $p(\hat{y}|y)$ where $y$ is the clean and $\hat{y}$ the noisy label. For example, this can be represented as a noise or confusion matrix between the clean and the noisy labels, as explained in Section SECREF3. Having its roots in statistics BIBREF4, this or similar ideas have been recently studied in NLP BIBREF2, BIBREF3, BIBREF5, image classification BIBREF6, BIBREF7, BIBREF8 and general machine learning settings BIBREF9, BIBREF10, BIBREF11. All of these methods, however, do not take the features into account that are used to represent the instances during classification. In BIBREF12 only the noise type depends on $x$ but not the actual noise model. BIBREF13 and BIBREF14 use the learned feature representation $h$ to model $p(\hat{y}|y,h(x))$ for image classification and relation extraction respectively. In the work of BIBREF15, $p(y|\hat{y},h(x))$ is estimated to clean the labels for an image classification task. The survey by BIBREF16 gives a detailed overview about other techniques for learning in the presence of noisy labels. Specific to learning noisy sequence labels in NLP, BIBREF2 used a combination of clean and noisy data for low-resource POS tagging. BIBREF17 suggested partial annotation learning to lessen the effects of incomplete annotations and reinforcement learning for filtering incorrect labels for Chinese NER. 
BIBREF3 used a confusion matrix and proposed to leverage pairs of clean and noisy labels for its initialization, evaluating on English NER. For English NER and Chunking, BIBREF5 also used a confusion matrix but learned it with an EM approach and combined it with multi-task learning. Recently, BIBREF18 studied input from different, unreliable sources and how to combine them for NER prediction. Global Noise Model We assume a low-resource setting with a small set of gold standard annotated data $C$ consisting of instances with features $x$ and corresponding, clean labels $y$. Additionally, a large set of noisy instances $(x,\hat{y}) \in N$ is available. This can be obtained e.g. from weak or distant supervision. In a multi-class classification setting, we can learn the probability of a label $y$ having a specific class given the feature $x$ as where $k$ is the number of classes, $h$ is a learned, non-linear function (in our case a neural network) and $u$ is the softmax weights. This is our base model trained on $C$. Due to the errors in the labels, the clean and noisy labels have different distributions. Therefore, learning on $C$ and $N$ jointly can be detrimental for the performance of predicting unseen, clean instances. Nevertheless, the noisy-labeled data is still related to $C$ and can contain useful information that we want to successfully leverage. We transform the predicted (clean) distribution of the base model to the noisy label distribution The relationship is modeled using a confusion matrix (also called noise or transformation matrix or noise layer) with learned weights $b_{ij}$: The overall architecture is visualized in Figure FIGREF4. An important question is how to initialize this noise layer. As proposed by BIBREF3, we apply the same distant supervision technique used to obtain $N$ from unlabeled data on the already labeled instances in $C$. We thus obtain pairs of clean $y$ and corresponding noisy labels $\hat{y}$ for the same instances and the weights of the noise layer can be initialized as Following the naming by BIBREF14, we call this the global noise model. Feature Dependent Noise Model The global confusion matrix is a simple model which assumes that the errors in the noisy labels depend on the clean labels. An approach that also takes the corresponding features $x$ into account can model more complex relations. BIBREF15 and BIBREF14 use multiple layers of a neural network to model these relationships. However, in low resource settings with only small amounts of clean, supervised data, these more complex models can be difficult to learn. In contrast to that, larger amounts of unlabeled text are usually available even in low-resource settings. Therefore, we propose to use unsupervised clustering techniques to partition the feature space of the input words (and the corresponding instances) before estimating the noise matrices. To create the clusters, we use either Brown clustering BIBREF19 on the input words or $k$-means clustering BIBREF20 on the pretrained word embeddings after applying PCA BIBREF21. In sequence labeling tasks, the features $x$ of an instance usually consist of the input word $\iota (x)$ and its context. Given a clustering $\Pi $ over the input words $\lbrace \iota (x) \mid (x,y) \in C \cup N \rbrace $ consisting of clusters $\Pi _1, ..., \Pi _p$, we can group all clean and noisy instances into groups For each group, we construct an independent confusion matrix using Formulas DISPLAY_FORM3 and DISPLAY_FORM5. 
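A compact sketch of the global noise layer described above, assuming a parameterization in which the confusion weights are initialized from the log of the estimated transition probabilities and re-normalized with a row-wise softmax; the exact parameterization of the original model may differ.

import numpy as np
import torch
import torch.nn.functional as F

def init_confusion(clean, noisy, k):
    """Initialize b_ij from pairs of clean and distantly supervised labels on C."""
    counts = np.zeros((k, k))
    for y, y_hat in zip(clean, noisy):
        counts[y, y_hat] += 1
    counts = counts / counts.sum(axis=1, keepdims=True).clip(min=1)
    return torch.log(torch.tensor(counts, dtype=torch.float32).clamp(min=1e-6))

class GlobalNoiseLayer(torch.nn.Module):
    def __init__(self, init_weights):
        super().__init__()
        self.b = torch.nn.Parameter(init_weights)       # learned confusion weights
    def forward(self, clean_probs):
        # p(y_hat | x) = sum_i p(y = i | x) * p(y_hat | y = i)
        return clean_probs @ F.softmax(self.b, dim=1)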
The prediction of the noisy label $\hat{y}$ (Formula DISPLAY_FORM2) then becomes Since the clustering is performed on unsupervised data, in low-resource settings, the size of an actual group of instances $G_q$ can be very small. If the number of members in a group is insufficient, the estimation of reliable noise matrices is difficult. This issue can be avoided by only using the largest groups and creating a separate group for all other instances. To make use of all the clusters, we alternatively propose to interpolate between the global and the group confusion matrix: The interpolation hyperparameter $\lambda $ (with $0 \le \lambda \le 1$) regulates the influence from the global matrix on the interpolated matrix. The selection of the largest groups and the interpolation can also be combined. Experiments We evaluate all models in five low-resource NER settings across different languages. Although the evaluation is performed for NER labeling, the proposed models are not restricted to the task of NER and can potentially be used for other tasks. Experiments ::: Models We follow the BiLSTM architecture from BIBREF3. Only the optimizer was changed for all models to NADAM BIBREF22 as this helped with convergence problems for increasing cluster numbers. The Base is trained only on clean data while Base+Noise is trained on both the clean and the noisy data without noise handling. Global-CM uses a global confusion matrix for all noisy instances to model the noise as proposed by BIBREF3 and presented in Section SECREF3. The same architecture is used for Global-ID-CM, but the confusion matrix is initialized with the identity matrix (instead of Formula DISPLAY_FORM5) and only adapted during training. The cluster-based models we propose in Section SECREF4 are Brown-CM and K-Means-CM. We experimented with numbers of clusters of 5, 10, 25 and 50. The models that select only the largest groups $G$ are marked as *-Freq and select either 30% or 50% of the clusters. The interpolation models have the postfix *-IP with $\lambda \in \lbrace 0.3, 0.5, 0.7\rbrace $ . The combination of both is named *-Freq-IP. As for all other hyperparameters, the choice was taken on the development set. We implemented the Cleaning BIBREF15 and Dynamic-CM BIBREF14 models. Both were not developed for sequence labeling tasks and therefore needed to be adapted. For the Cleaning model, we followed the instructions by BIBREF3. The embedding and prediction components of the Dynamic-CM model were replaced according to our base model. The output of the dense layer was used as input to the dynamic matrix generation. We experimented with and without their proposed trace loss. The training for all models was performed with labels in the IO format. The predicted labels for the test data were converted and evaluated in IOB2 with the official CoNLL evaluation script. The IOB2 format would increase matrix size making the confusion matrix estimation more difficult without adding much information in practice. In preliminary experiments, this decreased performance in particular for low-resource settings. Experiments ::: Data The models were tested on the four CoNLL datasets for English, German, Spanish and Dutch BIBREF23, BIBREF24 using the standard split, and the Estonian data from BIBREF25 using a 10/10/80 split for dev/test/train sets. For each language, the labels of 1% of the training data (ca. 2100 instances) were used to obtain a low-resource setting. We treat this as the clean data $C$. 
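The cluster-based variants with interpolation could be realized along the lines of the following sketch; the placement of the softmax and the exact form of the interpolation (a convex combination weighted by $\lambda $) are assumptions made for illustration.

import torch
import torch.nn.functional as F

class ClusterNoiseLayer(torch.nn.Module):
    """One confusion matrix per word cluster, interpolated with the global matrix."""
    def __init__(self, global_init, group_inits, lam=0.5):
        super().__init__()
        self.b_global = torch.nn.Parameter(global_init)                # (k, k)
        self.b_groups = torch.nn.Parameter(torch.stack(group_inits))   # (p, k, k)
        self.lam = lam
    def forward(self, clean_probs, cluster_ids):
        cm_global = F.softmax(self.b_global, dim=-1)
        cm_group = F.softmax(self.b_groups[cluster_ids], dim=-1)       # (batch, k, k)
        cm = self.lam * cm_global + (1 - self.lam) * cm_group          # interpolate the matrices
        return torch.bmm(clean_probs.unsqueeze(1), cm).squeeze(1)      # noisy label distribution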
The rest of the (now unlabeled) training data was used for the automatic annotation which we treat as noisily labeled data $N$. We applied the distant supervision method by BIBREF1, which uses lists and gazetteer information for NER labeling. As seen in Table TABREF19, this method reaches rather high precision but has a poor recall. The development set of the original dataset is used for model-epoch and hyperparameter selection, and the results are reported on the complete, clean test set. The words were embedded with the pretrained fastText vectors BIBREF26. The clusters were calculated on the unlabeled version of the full training data. Additionally, the Brown clusters used the language-specific documents from the Europarl corpus BIBREF27. Experimental Results The results of all models are shown in Table TABREF9. The newly proposed cluster-based models achieve the best performance across all languages and outperform all other models in particular for Dutch and English. The combination of interpolation with the global matrix and the selection of large clusters is almost always beneficial compared to the cluster-based models using only one of the methods. In general, both clustering methods achieve similar performance in combination with interpolation and selection, except for English, where Brown clustering performs worse than $k$-Means clustering. While the Brown clustering was trained on the relatively small Europarl corpus, $k$-Means clustering seems to benefit from the word embeddings trained on documents from the much larger common crawl. Analysis In the majority of cases, a cluster size of 10 or 25 was selected on the development set during the hyperparameter search. Increasing the number of clusters introduces smaller clusters for which it is difficult to estimate the noise matrix, due to the limited training resources. On the other hand, decreasing the number of clusters can generalize too much, resulting in loss of information on the noise distribution. For the $\lambda $ parameter, a value of either 0.3 or 0.5 was chosen on the development set giving the group clusters more or equal weight compared to the global confusion matrix. This shows that the feature dependent noise matrices are important and have a positive impact on performance. Five confusion matrices for groups and the global matrix in the English data are shown as examples in Figure FIGREF10. One can see that the noise matrix can visibly differ depending on the cluster of the input word. Some of these differences can also be directly explained by flaws in the distant supervision method. The automatic annotation did not label any locations written in all upper-case letters as locations. Therefore, the noise distribution for all upper-cased locations differs from the distribution of other location names (cf. FIGREF10 and FIGREF10). The words April and June are used both as names for a month and as first names in English. This results in a very specific noise distribution with many temporal expressions being annotated as person entities (cf. FIGREF10). Similar to this, first-person names and also Asian words are likely to be labeled as persons by the automatic annotation method (cf. FIGREF10 and FIGREF10). All of these groups show traits that are not displayed in the global matrix, allowing the cluster-based models to outperform the other systems. Conclusions We have shown that the noise models with feature-dependent confusion matrices can be used effectively in practice. 
These models improve low-resource named entity recognition with noisy labels beyond all other tested baselines. Further, the feature-dependent confusion matrices are task-independent and could be used for other NLP tasks, which is one possible direction of future research. Acknowledgments The authors would like to thank Heike Adel, Annemarie Friedrich and the anonymous reviewers for their helpful comments. This work has been partially funded by Deutsche Forschungsgemeinschaft (DFG) under grant SFB 1102: Information Density and Linguistic Encoding.
Base, Base+Noise, Cleaning, Dynamic-CM, Global-CM, Global-ID-CM, Brown-CM, K-Means-CM
b6a6bdca6dee70f8fe6dd1cfe3bb2c5ff03b1605
b6a6bdca6dee70f8fe6dd1cfe3bb2c5ff03b1605_0
Q: Did they evaluate against baseline? Text: Introduction Most languages, even with millions of speakers, have not been the center for natural language processing and are counted as low-resource for tasks like named entity recognition (NER). Similarly, even for high-resource languages, there exists only few labeled data for most entity types beyond person, location and organization. Distantly- or weakly-supervised approaches have been proposed to solve this issue, e.g., by using lists of entities for labeling raw text BIBREF0, BIBREF1. This allows obtaining large amounts of training data quickly and cheaply. Unfortunately, these labels often contain errors and learning with this noisily-labeled data is difficult and can even reduce overall performance (see, e.g. BIBREF2). A variety of ideas have been proposed to overcome the issues of noisy training data. One popular approach is to estimate the relation between noisy and clean, gold-standard labels and use this noise model to improve the training procedure. However, most of these approaches only assume a dependency between the labels and do not take the features into account when modeling the label noise. This may disregard important information. The global confusion matrix BIBREF3 is a simple model which assumes that the errors in the noisy labels just depend on the clean labels. Our contributions are as follows: We propose to cluster the input words with the help of additional, unlabeled data. Based on this partition of the feature space, we obtain different confusion matrices that describe the relationship between clean and noisy labels. We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise. The advanced modeling of the noisy labels substantially improves the performance up to 36% over methods without noise-handling and up to 9% over all other noise-handling baselines. Related Work A popular approach is modeling the relationship between noisy and clean labels, i.e., estimating $p(\hat{y}|y)$ where $y$ is the clean and $\hat{y}$ the noisy label. For example, this can be represented as a noise or confusion matrix between the clean and the noisy labels, as explained in Section SECREF3. Having its roots in statistics BIBREF4, this or similar ideas have been recently studied in NLP BIBREF2, BIBREF3, BIBREF5, image classification BIBREF6, BIBREF7, BIBREF8 and general machine learning settings BIBREF9, BIBREF10, BIBREF11. All of these methods, however, do not take the features into account that are used to represent the instances during classification. In BIBREF12 only the noise type depends on $x$ but not the actual noise model. BIBREF13 and BIBREF14 use the learned feature representation $h$ to model $p(\hat{y}|y,h(x))$ for image classification and relation extraction respectively. In the work of BIBREF15, $p(y|\hat{y},h(x))$ is estimated to clean the labels for an image classification task. The survey by BIBREF16 gives a detailed overview about other techniques for learning in the presence of noisy labels. Specific to learning noisy sequence labels in NLP, BIBREF2 used a combination of clean and noisy data for low-resource POS tagging. BIBREF17 suggested partial annotation learning to lessen the effects of incomplete annotations and reinforcement learning for filtering incorrect labels for Chinese NER. 
BIBREF3 used a confusion matrix and proposed to leverage pairs of clean and noisy labels for its initialization, evaluating on English NER. For English NER and Chunking, BIBREF5 also used a confusion matrix but learned it with an EM approach and combined it with multi-task learning. Recently, BIBREF18 studied input from different, unreliable sources and how to combine them for NER prediction. Global Noise Model We assume a low-resource setting with a small set of gold standard annotated data $C$ consisting of instances with features $x$ and corresponding, clean labels $y$. Additionally, a large set of noisy instances $(x,\hat{y}) \in N$ is available. This can be obtained e.g. from weak or distant supervision. In a multi-class classification setting, we can learn the probability of a label $y$ having a specific class given the feature $x$ as where $k$ is the number of classes, $h$ is a learned, non-linear function (in our case a neural network) and $u$ is the softmax weights. This is our base model trained on $C$. Due to the errors in the labels, the clean and noisy labels have different distributions. Therefore, learning on $C$ and $N$ jointly can be detrimental for the performance of predicting unseen, clean instances. Nevertheless, the noisy-labeled data is still related to $C$ and can contain useful information that we want to successfully leverage. We transform the predicted (clean) distribution of the base model to the noisy label distribution The relationship is modeled using a confusion matrix (also called noise or transformation matrix or noise layer) with learned weights $b_{ij}$: The overall architecture is visualized in Figure FIGREF4. An important question is how to initialize this noise layer. As proposed by BIBREF3, we apply the same distant supervision technique used to obtain $N$ from unlabeled data on the already labeled instances in $C$. We thus obtain pairs of clean $y$ and corresponding noisy labels $\hat{y}$ for the same instances and the weights of the noise layer can be initialized as Following the naming by BIBREF14, we call this the global noise model. Feature Dependent Noise Model The global confusion matrix is a simple model which assumes that the errors in the noisy labels depend on the clean labels. An approach that also takes the corresponding features $x$ into account can model more complex relations. BIBREF15 and BIBREF14 use multiple layers of a neural network to model these relationships. However, in low resource settings with only small amounts of clean, supervised data, these more complex models can be difficult to learn. In contrast to that, larger amounts of unlabeled text are usually available even in low-resource settings. Therefore, we propose to use unsupervised clustering techniques to partition the feature space of the input words (and the corresponding instances) before estimating the noise matrices. To create the clusters, we use either Brown clustering BIBREF19 on the input words or $k$-means clustering BIBREF20 on the pretrained word embeddings after applying PCA BIBREF21. In sequence labeling tasks, the features $x$ of an instance usually consist of the input word $\iota (x)$ and its context. Given a clustering $\Pi $ over the input words $\lbrace \iota (x) \mid (x,y) \in C \cup N \rbrace $ consisting of clusters $\Pi _1, ..., \Pi _p$, we can group all clean and noisy instances into groups For each group, we construct an independent confusion matrix using Formulas DISPLAY_FORM3 and DISPLAY_FORM5. 
The prediction of the noisy label $\hat{y}$ (Formula DISPLAY_FORM2) then becomes Since the clustering is performed on unsupervised data, in low-resource settings, the size of an actual group of instances $G_q$ can be very small. If the number of members in a group is insufficient, the estimation of reliable noise matrices is difficult. This issue can be avoided by only using the largest groups and creating a separate group for all other instances. To make use of all the clusters, we alternatively propose to interpolate between the global and the group confusion matrix: The interpolation hyperparameter $\lambda $ (with $0 \le \lambda \le 1$) regulates the influence from the global matrix on the interpolated matrix. The selection of the largest groups and the interpolation can also be combined. Experiments We evaluate all models in five low-resource NER settings across different languages. Although the evaluation is performed for NER labeling, the proposed models are not restricted to the task of NER and can potentially be used for other tasks. Experiments ::: Models We follow the BiLSTM architecture from BIBREF3. Only the optimizer was changed for all models to NADAM BIBREF22 as this helped with convergence problems for increasing cluster numbers. The Base is trained only on clean data while Base+Noise is trained on both the clean and the noisy data without noise handling. Global-CM uses a global confusion matrix for all noisy instances to model the noise as proposed by BIBREF3 and presented in Section SECREF3. The same architecture is used for Global-ID-CM, but the confusion matrix is initialized with the identity matrix (instead of Formula DISPLAY_FORM5) and only adapted during training. The cluster-based models we propose in Section SECREF4 are Brown-CM and K-Means-CM. We experimented with numbers of clusters of 5, 10, 25 and 50. The models that select only the largest groups $G$ are marked as *-Freq and select either 30% or 50% of the clusters. The interpolation models have the postfix *-IP with $\lambda \in \lbrace 0.3, 0.5, 0.7\rbrace $ . The combination of both is named *-Freq-IP. As for all other hyperparameters, the choice was taken on the development set. We implemented the Cleaning BIBREF15 and Dynamic-CM BIBREF14 models. Both were not developed for sequence labeling tasks and therefore needed to be adapted. For the Cleaning model, we followed the instructions by BIBREF3. The embedding and prediction components of the Dynamic-CM model were replaced according to our base model. The output of the dense layer was used as input to the dynamic matrix generation. We experimented with and without their proposed trace loss. The training for all models was performed with labels in the IO format. The predicted labels for the test data were converted and evaluated in IOB2 with the official CoNLL evaluation script. The IOB2 format would increase matrix size making the confusion matrix estimation more difficult without adding much information in practice. In preliminary experiments, this decreased performance in particular for low-resource settings. Experiments ::: Data The models were tested on the four CoNLL datasets for English, German, Spanish and Dutch BIBREF23, BIBREF24 using the standard split, and the Estonian data from BIBREF25 using a 10/10/80 split for dev/test/train sets. For each language, the labels of 1% of the training data (ca. 2100 instances) were used to obtain a low-resource setting. We treat this as the clean data $C$. 
The rest of the (now unlabeled) training data was used for the automatic annotation which we treat as noisily labeled data $N$. We applied the distant supervision method by BIBREF1, which uses lists and gazetteer information for NER labeling. As seen in Table TABREF19, this method reaches rather high precision but has a poor recall. The development set of the original dataset is used for model-epoch and hyperparameter selection, and the results are reported on the complete, clean test set. The words were embedded with the pretrained fastText vectors BIBREF26. The clusters were calculated on the unlabeled version of the full training data. Additionally, the Brown clusters used the language-specific documents from the Europarl corpus BIBREF27. Experimental Results The results of all models are shown in Table TABREF9. The newly proposed cluster-based models achieve the best performance across all languages and outperform all other models in particular for Dutch and English. The combination of interpolation with the global matrix and the selection of large clusters is almost always beneficial compared to the cluster-based models using only one of the methods. In general, both clustering methods achieve similar performance in combination with interpolation and selection, except for English, where Brown clustering performs worse than $k$-Means clustering. While the Brown clustering was trained on the relatively small Europarl corpus, $k$-Means clustering seems to benefit from the word embeddings trained on documents from the much larger common crawl. Analysis In the majority of cases, a cluster size of 10 or 25 was selected on the development set during the hyperparameter search. Increasing the number of clusters introduces smaller clusters for which it is difficult to estimate the noise matrix, due to the limited training resources. On the other hand, decreasing the number of clusters can generalize too much, resulting in loss of information on the noise distribution. For the $\lambda $ parameter, a value of either 0.3 or 0.5 was chosen on the development set giving the group clusters more or equal weight compared to the global confusion matrix. This shows that the feature dependent noise matrices are important and have a positive impact on performance. Five confusion matrices for groups and the global matrix in the English data are shown as examples in Figure FIGREF10. One can see that the noise matrix can visibly differ depending on the cluster of the input word. Some of these differences can also be directly explained by flaws in the distant supervision method. The automatic annotation did not label any locations written in all upper-case letters as locations. Therefore, the noise distribution for all upper-cased locations differs from the distribution of other location names (cf. FIGREF10 and FIGREF10). The words April and June are used both as names for a month and as first names in English. This results in a very specific noise distribution with many temporal expressions being annotated as person entities (cf. FIGREF10). Similar to this, first-person names and also Asian words are likely to be labeled as persons by the automatic annotation method (cf. FIGREF10 and FIGREF10). All of these groups show traits that are not displayed in the global matrix, allowing the cluster-based models to outperform the other systems. Conclusions We have shown that the noise models with feature-dependent confusion matrices can be used effectively in practice. 
These models improve low-resource named entity recognition with noisy labels beyond all other tested baselines. Further, the feature-dependent confusion matrices are task-independent and could be used for other NLP tasks, which is one possible direction of future research. Acknowledgments The authors would like to thank Heike Adel, Annemarie Friedrich and the anonymous reviewers for their helpful comments. This work has been partially funded by Deutsche Forschungsgemeinschaft (DFG) under grant SFB 1102: Information Density and Linguistic Encoding.
Yes
8951fde01b1643fcb4b91e51f84e074ce3b69743
8951fde01b1643fcb4b91e51f84e074ce3b69743_0
Q: How they evaluate their approach? Text: Introduction Most languages, even with millions of speakers, have not been the center for natural language processing and are counted as low-resource for tasks like named entity recognition (NER). Similarly, even for high-resource languages, there exists only few labeled data for most entity types beyond person, location and organization. Distantly- or weakly-supervised approaches have been proposed to solve this issue, e.g., by using lists of entities for labeling raw text BIBREF0, BIBREF1. This allows obtaining large amounts of training data quickly and cheaply. Unfortunately, these labels often contain errors and learning with this noisily-labeled data is difficult and can even reduce overall performance (see, e.g. BIBREF2). A variety of ideas have been proposed to overcome the issues of noisy training data. One popular approach is to estimate the relation between noisy and clean, gold-standard labels and use this noise model to improve the training procedure. However, most of these approaches only assume a dependency between the labels and do not take the features into account when modeling the label noise. This may disregard important information. The global confusion matrix BIBREF3 is a simple model which assumes that the errors in the noisy labels just depend on the clean labels. Our contributions are as follows: We propose to cluster the input words with the help of additional, unlabeled data. Based on this partition of the feature space, we obtain different confusion matrices that describe the relationship between clean and noisy labels. We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise. The advanced modeling of the noisy labels substantially improves the performance up to 36% over methods without noise-handling and up to 9% over all other noise-handling baselines. Related Work A popular approach is modeling the relationship between noisy and clean labels, i.e., estimating $p(\hat{y}|y)$ where $y$ is the clean and $\hat{y}$ the noisy label. For example, this can be represented as a noise or confusion matrix between the clean and the noisy labels, as explained in Section SECREF3. Having its roots in statistics BIBREF4, this or similar ideas have been recently studied in NLP BIBREF2, BIBREF3, BIBREF5, image classification BIBREF6, BIBREF7, BIBREF8 and general machine learning settings BIBREF9, BIBREF10, BIBREF11. All of these methods, however, do not take the features into account that are used to represent the instances during classification. In BIBREF12 only the noise type depends on $x$ but not the actual noise model. BIBREF13 and BIBREF14 use the learned feature representation $h$ to model $p(\hat{y}|y,h(x))$ for image classification and relation extraction respectively. In the work of BIBREF15, $p(y|\hat{y},h(x))$ is estimated to clean the labels for an image classification task. The survey by BIBREF16 gives a detailed overview about other techniques for learning in the presence of noisy labels. Specific to learning noisy sequence labels in NLP, BIBREF2 used a combination of clean and noisy data for low-resource POS tagging. BIBREF17 suggested partial annotation learning to lessen the effects of incomplete annotations and reinforcement learning for filtering incorrect labels for Chinese NER. 
BIBREF3 used a confusion matrix and proposed to leverage pairs of clean and noisy labels for its initialization, evaluating on English NER. For English NER and Chunking, BIBREF5 also used a confusion matrix but learned it with an EM approach and combined it with multi-task learning. Recently, BIBREF18 studied input from different, unreliable sources and how to combine them for NER prediction. Global Noise Model We assume a low-resource setting with a small set of gold standard annotated data $C$ consisting of instances with features $x$ and corresponding, clean labels $y$. Additionally, a large set of noisy instances $(x,\hat{y}) \in N$ is available. This can be obtained e.g. from weak or distant supervision. In a multi-class classification setting, we can learn the probability of a label $y$ having a specific class given the feature $x$ as where $k$ is the number of classes, $h$ is a learned, non-linear function (in our case a neural network) and $u$ is the softmax weights. This is our base model trained on $C$. Due to the errors in the labels, the clean and noisy labels have different distributions. Therefore, learning on $C$ and $N$ jointly can be detrimental for the performance of predicting unseen, clean instances. Nevertheless, the noisy-labeled data is still related to $C$ and can contain useful information that we want to successfully leverage. We transform the predicted (clean) distribution of the base model to the noisy label distribution The relationship is modeled using a confusion matrix (also called noise or transformation matrix or noise layer) with learned weights $b_{ij}$: The overall architecture is visualized in Figure FIGREF4. An important question is how to initialize this noise layer. As proposed by BIBREF3, we apply the same distant supervision technique used to obtain $N$ from unlabeled data on the already labeled instances in $C$. We thus obtain pairs of clean $y$ and corresponding noisy labels $\hat{y}$ for the same instances and the weights of the noise layer can be initialized as Following the naming by BIBREF14, we call this the global noise model. Feature Dependent Noise Model The global confusion matrix is a simple model which assumes that the errors in the noisy labels depend on the clean labels. An approach that also takes the corresponding features $x$ into account can model more complex relations. BIBREF15 and BIBREF14 use multiple layers of a neural network to model these relationships. However, in low resource settings with only small amounts of clean, supervised data, these more complex models can be difficult to learn. In contrast to that, larger amounts of unlabeled text are usually available even in low-resource settings. Therefore, we propose to use unsupervised clustering techniques to partition the feature space of the input words (and the corresponding instances) before estimating the noise matrices. To create the clusters, we use either Brown clustering BIBREF19 on the input words or $k$-means clustering BIBREF20 on the pretrained word embeddings after applying PCA BIBREF21. In sequence labeling tasks, the features $x$ of an instance usually consist of the input word $\iota (x)$ and its context. Given a clustering $\Pi $ over the input words $\lbrace \iota (x) \mid (x,y) \in C \cup N \rbrace $ consisting of clusters $\Pi _1, ..., \Pi _p$, we can group all clean and noisy instances into groups For each group, we construct an independent confusion matrix using Formulas DISPLAY_FORM3 and DISPLAY_FORM5. 
The prediction of the noisy label $\hat{y}$ (Formula DISPLAY_FORM2) then becomes Since the clustering is performed on unsupervised data, in low-resource settings, the size of an actual group of instances $G_q$ can be very small. If the number of members in a group is insufficient, the estimation of reliable noise matrices is difficult. This issue can be avoided by only using the largest groups and creating a separate group for all other instances. To make use of all the clusters, we alternatively propose to interpolate between the global and the group confusion matrix: The interpolation hyperparameter $\lambda $ (with $0 \le \lambda \le 1$) regulates the influence from the global matrix on the interpolated matrix. The selection of the largest groups and the interpolation can also be combined. Experiments We evaluate all models in five low-resource NER settings across different languages. Although the evaluation is performed for NER labeling, the proposed models are not restricted to the task of NER and can potentially be used for other tasks. Experiments ::: Models We follow the BiLSTM architecture from BIBREF3. Only the optimizer was changed for all models to NADAM BIBREF22 as this helped with convergence problems for increasing cluster numbers. The Base is trained only on clean data while Base+Noise is trained on both the clean and the noisy data without noise handling. Global-CM uses a global confusion matrix for all noisy instances to model the noise as proposed by BIBREF3 and presented in Section SECREF3. The same architecture is used for Global-ID-CM, but the confusion matrix is initialized with the identity matrix (instead of Formula DISPLAY_FORM5) and only adapted during training. The cluster-based models we propose in Section SECREF4 are Brown-CM and K-Means-CM. We experimented with numbers of clusters of 5, 10, 25 and 50. The models that select only the largest groups $G$ are marked as *-Freq and select either 30% or 50% of the clusters. The interpolation models have the postfix *-IP with $\lambda \in \lbrace 0.3, 0.5, 0.7\rbrace $ . The combination of both is named *-Freq-IP. As for all other hyperparameters, the choice was taken on the development set. We implemented the Cleaning BIBREF15 and Dynamic-CM BIBREF14 models. Both were not developed for sequence labeling tasks and therefore needed to be adapted. For the Cleaning model, we followed the instructions by BIBREF3. The embedding and prediction components of the Dynamic-CM model were replaced according to our base model. The output of the dense layer was used as input to the dynamic matrix generation. We experimented with and without their proposed trace loss. The training for all models was performed with labels in the IO format. The predicted labels for the test data were converted and evaluated in IOB2 with the official CoNLL evaluation script. The IOB2 format would increase matrix size making the confusion matrix estimation more difficult without adding much information in practice. In preliminary experiments, this decreased performance in particular for low-resource settings. Experiments ::: Data The models were tested on the four CoNLL datasets for English, German, Spanish and Dutch BIBREF23, BIBREF24 using the standard split, and the Estonian data from BIBREF25 using a 10/10/80 split for dev/test/train sets. For each language, the labels of 1% of the training data (ca. 2100 instances) were used to obtain a low-resource setting. We treat this as the clean data $C$. 
The rest of the (now unlabeled) training data was used for the automatic annotation which we treat as noisily labeled data $N$. We applied the distant supervision method by BIBREF1, which uses lists and gazetteer information for NER labeling. As seen in Table TABREF19, this method reaches rather high precision but has a poor recall. The development set of the original dataset is used for model-epoch and hyperparameter selection, and the results are reported on the complete, clean test set. The words were embedded with the pretrained fastText vectors BIBREF26. The clusters were calculated on the unlabeled version of the full training data. Additionally, the Brown clusters used the language-specific documents from the Europarl corpus BIBREF27. Experimental Results The results of all models are shown in Table TABREF9. The newly proposed cluster-based models achieve the best performance across all languages and outperform all other models in particular for Dutch and English. The combination of interpolation with the global matrix and the selection of large clusters is almost always beneficial compared to the cluster-based models using only one of the methods. In general, both clustering methods achieve similar performance in combination with interpolation and selection, except for English, where Brown clustering performs worse than $k$-Means clustering. While the Brown clustering was trained on the relatively small Europarl corpus, $k$-Means clustering seems to benefit from the word embeddings trained on documents from the much larger common crawl. Analysis In the majority of cases, a cluster size of 10 or 25 was selected on the development set during the hyperparameter search. Increasing the number of clusters introduces smaller clusters for which it is difficult to estimate the noise matrix, due to the limited training resources. On the other hand, decreasing the number of clusters can generalize too much, resulting in loss of information on the noise distribution. For the $\lambda $ parameter, a value of either 0.3 or 0.5 was chosen on the development set giving the group clusters more or equal weight compared to the global confusion matrix. This shows that the feature dependent noise matrices are important and have a positive impact on performance. Five confusion matrices for groups and the global matrix in the English data are shown as examples in Figure FIGREF10. One can see that the noise matrix can visibly differ depending on the cluster of the input word. Some of these differences can also be directly explained by flaws in the distant supervision method. The automatic annotation did not label any locations written in all upper-case letters as locations. Therefore, the noise distribution for all upper-cased locations differs from the distribution of other location names (cf. FIGREF10 and FIGREF10). The words April and June are used both as names for a month and as first names in English. This results in a very specific noise distribution with many temporal expressions being annotated as person entities (cf. FIGREF10). Similar to this, first-person names and also Asian words are likely to be labeled as persons by the automatic annotation method (cf. FIGREF10 and FIGREF10). All of these groups show traits that are not displayed in the global matrix, allowing the cluster-based models to outperform the other systems. Conclusions We have shown that the noise models with feature-dependent confusion matrices can be used effectively in practice. 
These models improve low-resource named entity recognition with noisy labels beyond all other tested baselines. Further, the feature-dependent confusion matrices are task-independent and could be used for other NLP tasks, which is one possible direction of future research. Acknowledgments The authors would like to thank Heike Adel, Annemarie Friedrich and the anonymous reviewers for their helpful comments. This work has been partially funded by Deutsche Forschungsgemeinschaft (DFG) under grant SFB 1102: Information Density and Linguistic Encoding.
They evaluate the newly proposed models and related baselines in several low-resource NER settings across different languages, using real, distantly supervised data with non-synthetic noise
38c74ab8292a94fc5a82999400ee9c06be19f791
38c74ab8292a94fc5a82999400ee9c06be19f791_0
Q: How large is the corpus? Text: Introduction While document classification models should be objective and independent from human biases in documents, research have shown that the models can learn human biases and therefore be discriminatory towards particular demographic groups BIBREF0, BIBREF1, BIBREF2. The goal of fairness-aware document classifiers is to train and build non-discriminatory models towards people no matter what their demographic attributes are, such as gender and ethnicity. Existing research BIBREF0, BIBREF3, BIBREF4, BIBREF5, BIBREF1 in evaluating fairness of document classifiers focus on the group fairness BIBREF6, which refers to every demographic group has equal probability of being assigned to the positive predicted document category. However, the lack of original author demographic attributes and multilingual corpora bring challenges towards the fairness evaluation of document classifiers. First, the datasets commonly used to build and evaluate the fairness of document classifiers obtain derived synthetic author demographic attributes instead of the original author information. The common data sources either derive from Wikipedia toxic comments BIBREF0, BIBREF4, BIBREF5 or synthetic document templates BIBREF3, BIBREF4. The Wikipedia Talk corpus BIBREF7 provides demographic information of annotators instead of the authors, Equity Evaluation Corpus BIBREF3 are created by sentence templates and combinations of racial names and gender coreferences. While existing work BIBREF8, BIBREF9 infers user demographic information (white/black, young/old) from the text, such inference is still likely to cause confounding errors that impact and break the independence between demographic factors and the fairness evaluation of text classifiers. Second, existing research in the fairness evaluation mainly focus on only English resources, such as age biases in blog posts BIBREF9, gender biases in Wikipedia comments BIBREF0 and racial biases in hate speech detection BIBREF8. Different languages have shown different patterns of linguistic variations across the demographic attributes BIBREF10, BIBREF11, methods BIBREF12, BIBREF4 to reduce and evaluate the demographic bias in English corpora may not apply to other languages. For example, Spanish has gender-dependent nouns, but this does not exist in English BIBREF2; and Portuguese varies across Brazil and Portugal in both word usage and grammar BIBREF13. The rich variations have not been explored under the fairness evaluation due to lack of multilingual corpora. Additionally, while we have hate speech detection datasets in multiple languages BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, there is still no integrated multilingual corpora that contain author demographic attributes which can be used to measure group fairness. The lack of author demographic attributes and multilingual datasets limits research for evaluating classifier fairness and developing unbiased classifiers. In this study, we combine previously published corpora labeled for Twitter hate speech recognition in English BIBREF14, BIBREF19, BIBREF20, Italian BIBREF15, Polish BIBREF16, Portuguese BIBREF18, and Spanish BIBREF17, and publish this multilingual data augmented with author-level demographic information for four attributes: race, gender, age and country. The demographic factors are inferred from user profiles, which are independent from text documents, the tweets. 
To our best knowledge, this is the first multilingual hate speech corpus annotated with author attributes aiming for fairness evaluation. We start with presenting collection and inference steps of the datasets. Next, we take an exploratory study on the language variations across demographic groups on the English dataset. We then experiment with four multiple classification models to establish baseline levels of this corpus. Finally, we evaluate the fairness performance of those document classifiers. Data We assemble the annotated datasets for hate speech classification. To narrow down the data sources, we limit our dataset sources to the unique online social media site, Twitter. We have requested 16 published Twitter hate speech datasets, and finally obtained 7 of them in five languages. By using the Twitter streaming API, we collected the tweets annotated by hate speech labels and their corresponding user profiles in English BIBREF14, BIBREF19, BIBREF20, Italian BIBREF15, Polish BIBREF16, Portuguese BIBREF18, and Spanish BIBREF17. We binarize all tweets' labels (indicating whether a tweet has indications of hate speech), allowing to merge the different label sets and reduce the data sparsity. Whether a tweet is considered hate speech heavily depends on who the speaker is; for example, whether a racial slur is intended as hate speech depends in part on the speaker's race BIBREF14. Therefore, hate speech classifiers may not generalize well across all groups of people, and disparities in the detection offensive speech could lead to bias in content moderation BIBREF21. Our contribution is to further annotate the data with user demographic attributes inferred from their public profiles, thus creating a corpus suitable for evaluating author-level fairness for this hate speech recognition task across multiple languages. Data ::: User Attribute Inference We consider four user factors of age, race, gender and geographic location. For location, we inference two granularities, country and US region, but only experiment with the country attribute. While the demographic attributes can be inferred through tweets BIBREF22, BIBREF8, we intentionally exclude the contents from the tweets if they infer these user attributes, in order to make the evaluation of fairness more reliable and independent. If users were grouped based on attributes inferred from their text, then any differences in text classification across those groups could be related to the same text. Instead, we infer attributes from public user profile information (i.e., description, name and photo). Data ::: User Attribute Inference ::: Age, Race, Gender. We infer these attributes from each user's profile image by using Face++ (https://www.faceplusplus.com/), a computer vision API that provides estimates of demographic characteristics. Empirical comparisons of facial recognition APIs have found that Face++ is the most accurate tool on Twitter data BIBREF23 and works comparatively better for darker skins BIBREF24. For the gender, we choose the binary categories (male/female) by the predicted probabilities. We map the racial outputs into four categories: Asian, Black, Latino and White. We only keep users that appear to be at least 13 years old, and we save the first result from the API if multiple faces are identified. We experiment and evaluate with binarization of race and age with roughly balanced distributions (white and nonwhite, $\le $ median vs. 
Data ::: User Attribute Inference ::: Country. Country-level language variations can bring challenges that are worth exploring. We extract geolocation information from users whose profiles contained either numerical location coordinates or a well-formatted (matching a regular expression) location name. We fed the extracted values to the Google Maps API (https://maps.googleapis.com) to obtain structured location information (city, state, country). We first identify the main country of each language's data and then binarize the country attribute to indicate whether a user is in that main country. For example, the majority of users in the English data are from the United States (US); therefore, we binarize the country attribute to indicate whether or not a user is in the US. Data ::: Corpus Summary We show the corpus statistics in Table TABREF8 and summarize the full demographic distributions in Table TABREF9. The binary demographic attributes (age, country, gender, race) bring several benefits. First, we can create comparatively balanced label distributions. We observe imbalances in race and gender in the Italian and Polish data, while the other attributes across the other languages show comparably balanced demographic distributions. Second, coarse labels reduce errors carried over from the Face++ inference. Third, binary attributes make it more convenient to analyze, conduct experiments on, and evaluate the group fairness of document classifiers. Table TABREF8 presents different patterns in the corpus. The Polish data has the fewest users. This is because that data focuses on people who own the most popular accounts BIBREF16, whereas the other datasets collected tweets randomly. The Polish dataset also shows a much sparser distribution of the hate speech label than the other languages. Table TABREF9 presents different patterns in the user attributes. English, Portuguese and Spanish users are younger than the Italian and Polish users in the collected data. Both the Italian and Polish data show more skewed demographic distributions in country, gender and race, while the other datasets show more balanced distributions.
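The country binarization can be sketched in the same spirit; the snippet below assumes a hypothetical country column already resolved from the profile location (e.g., via a geocoding service) rather than reproducing any specific geocoding API call.

```python
import pandas as pd

def binarize_country(users: pd.DataFrame) -> pd.DataFrame:
    """Flag whether each user is located in the corpus's main country.

    The `country` column is a hypothetical, already-resolved country name per user;
    for the English data the main country would be the US.
    """
    main_country = users["country"].mode().iloc[0]        # most frequent country in the data
    out = users.copy()
    out["in_main_country"] = out["country"].eq(main_country)
    return out
```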
Data ::: Demographic Inference Accuracy Image-based approaches will have inaccuracies, as a person's demographic attributes cannot be conclusively determined merely from their appearance. However, given the difficulty of obtaining ground truth values, we argue that automatically inferred attributes can still be informative for studying classifier fairness. If a classifier performs significantly differently across different groups of users, then this shows that the classifier is biased along certain groupings, even if those groupings are not perfectly aligned with the actual attributes they are named after. This subsection tries to quantify how reliably these groupings correspond to the demographic variables. Prior research found that Face++ achieves 93.0% and 92.0% accuracy on gender and ethnicity evaluations BIBREF23. We further conduct a small evaluation on the hate speech corpus using a sample of annotated user profile photos, providing a rough estimate of accuracy while acknowledging that our annotations are not ground truth. We obtained the annotations from the crowdsourcing website Figure Eight (https://figure-eight.com/). We randomly sampled 50 users whose attributes came from Face++ in each language. We anonymize the user profiles and feed the information to the crowdsourcing website. Three annotators annotated each user photo with the binary demographic categories. To select qualified annotators and ensure the quality of the evaluations, we set up 5 gold-standard annotation questions for each language. Annotators can join the evaluation task only after passing the gold-standard questions. We decide demographic attributes by majority vote and present evaluation results in Table TABREF11. Our final evaluations show that overall Face++ achieves average accuracy scores of 82.8%, 88.4% and 94.4% for age, race and gender, respectively. Data ::: Privacy Considerations To facilitate the study of classification fairness, we will publicly distribute this anonymized corpus with the inferred demographic attributes, including both original and binarized versions. To preserve user privacy, we will not publicize the personal profile information, including user ids, photos, geocoordinates and other user profile information, which were used to infer the demographic attributes. We will, however, provide the inferred demographic attributes in their original formats from Face++ and Google Maps upon request, to allow the wider research community to replicate the methodology and probe fairness in document classification in more depth. Language Variations across Demographic Groups Demographic factors can improve the performance of document classifiers BIBREF25, and demographic variations are rooted in language, especially in social media data BIBREF26, BIBREF25. For example, language styles are highly correlated with authors' demographic attributes, such as age, race, gender and location BIBREF27, BIBREF28. Research BIBREF29, BIBREF12, BIBREF30 finds that biases and stereotypes exist in word embeddings, which are widely used in document classification tasks. For example, “receptionist” is closer to females while “programmer” is closer to males, and “professor” is closer to Asian Americans while “housekeeper” is closer to Hispanic Americans. This motivates us to explore whether such language variations hold in our particular dataset and how strong the effects are. We conduct an empirical analysis of demographic predictability on the English dataset. Language Variations across Demographic Groups ::: Are Demographic Factors Predictable in Documents? We examine how accurately the documents can predict author demographic attributes at three different levels: Word-level. We extract TF-IDF-weighted 1- and 2-gram features. POS-level. We use the Tweebo parser BIBREF31 to tag and extract POS features. We count the POS tags and then normalize the counts for each document. Topic-level. We train a Latent Dirichlet Allocation BIBREF32 model with 20 topics using Gensim BIBREF33 with default parameters. Each document can then be represented as a probability distribution over the 20 topics. We shuffle and split the data into training (70%) and test (30%) sets. Three logistic regression classifiers are trained on the three levels of features separately. We measure the prediction accuracy and show the absolute improvements in Figure FIGREF18. The improved prediction accuracy scores over majority baselines suggest that language variations across demographic groups are encoded in the text documents. The results show that the age attribute is the most predictable from the documents.
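The word-level probe can be sketched as follows; docs and labels are hypothetical lists of tweets and binary demographic labels, and the POS- and topic-level probes would follow the same train/test pattern with their respective feature extractors.

```python
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def word_level_probe(docs, labels):
    """Predict one author attribute (e.g., binary age) from TF-IDF 1-/2-grams
    and report accuracy plus the absolute improvement over the majority baseline."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        docs, labels, test_size=0.3, shuffle=True, random_state=0)
    vec = TfidfVectorizer(ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(X_tr), y_tr)
    acc = accuracy_score(y_te, clf.predict(vec.transform(X_te)))
    majority = Counter(y_te).most_common(1)[0][1] / len(y_te)
    return acc, acc - majority
```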
We can also observe that word-level features are the most predictive of demographic factors, while POS features are the least predictive of the country factor. These results suggest there is a connection between language variations and demographic groups. This motivates us to further explore the language variations based on word features. We rank the word features by mutual information BIBREF34 and present the top 10 unigram features in Table TABREF14. The qualitative results show the word features most predictive of the demographic groups and suggest that such variations may impact the extracted feature representations and, in turn, the training of fair document classifiers. Table TABREF14 shows that when classifying hate speech tweets, the n-words and b-words are more significantly correlated with white authors than with the other racial groups. However, this contrasts with existing work BIBREF8, which reports that the two types of words are more significantly correlated with black authors. This highlights the value of our approach: to avoid confounding errors, we obtain author demographic information independently of the user-generated documents. Experiments Demographic variations are rooted in documents, especially in social media data BIBREF26, BIBREF25, BIBREF10. Such variations could further impact the performance and fairness of document classifiers. In this study, we experiment with four different classification models: logistic regression (LR), a recurrent neural network (RNN) BIBREF35, a convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37. We present baseline results for both performance and fairness evaluations across the multilingual corpus. Experiments ::: Data Preprocessing To anonymize user information, we hash user and tweet ids and then replace hyperlinks, usernames, and hashtags with generic symbols (URL, USER, HASHTAG). Documents are lowercased and tokenized using NLTK BIBREF38. The corpus is randomly split into training (70%), development (15%), and test (15%) sets. We train the models on the training set and find the optimal hyperparameters on the development set before final evaluations on the test set. We randomly shuffle the training data at the beginning of each training epoch. Experiments ::: Baseline Models We implement and experiment with four baseline classification models. To compare fairly, we cap the feature size at 15K for each classifier across all five languages. We calculate the weight for each document category as $\frac{N}{N_l}$ BIBREF39, where $N$ is the number of documents in each language and $N_l$ is the number of documents labeled with the category. In particular, for training the BERT model, we append two additional tokens, “[CLS]” and “[SEP]”, at the start and end of each document, respectively. For the neural models, we pad or truncate each document to 40 tokens. We use “unknown” as a replacement for unknown tokens. We initialize the CNN and RNN classifiers with pre-trained word embeddings BIBREF40, BIBREF41, BIBREF42, BIBREF43 and train the networks for up to 10 epochs. Experiments ::: Baseline Models ::: LR. We first extract TF-IDF-weighted features of uni-, bi-, and tri-grams on the corpora, using the most frequent 15K features with a minimum feature frequency of 2. We then train a LogisticRegression from scikit-learn BIBREF34. We use “liblinear” as the solver and leave the other parameters at their defaults.
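A sketch of this LR baseline follows, assuming train_labels is a plain Python list so that the $\frac{N}{N_l}$ category weights can be passed through scikit-learn's class_weight; variable names are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_lr_baseline(train_docs, train_labels):
    """TF-IDF uni- to tri-grams (top 15K features, min frequency 2) + liblinear LR,
    weighting each category by N / N_l."""
    vec = TfidfVectorizer(ngram_range=(1, 3), max_features=15000, min_df=2)
    X = vec.fit_transform(train_docs)
    n = len(train_labels)
    class_weight = {c: n / train_labels.count(c) for c in set(train_labels)}
    clf = LogisticRegression(solver="liblinear", class_weight=class_weight)
    clf.fit(X, train_labels)
    return vec, clf
```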
Experiments ::: Baseline Models ::: CNN. We implement the Convolutional Neural Network (CNN) classifier described in BIBREF36, BIBREF44 using Keras BIBREF45. We first apply 100 filters with three different kernel sizes, 3, 4 and 5. After the convolution operations, we feed the concatenated features to a fully connected layer and output document representations with 100 dimensions. We apply the “softplus” activation with l2 regularization of $.03$ and a dropout rate of $.3$ in the dense layer. The model feeds the document representation to the final prediction layer. We train the model with batch size 64, use Adam BIBREF46 as the optimizer and compute loss values with the cross-entropy function. We keep all other parameter settings as described in the original paper BIBREF36. Experiments ::: Baseline Models ::: RNN. We build a recurrent neural network (RNN) classifier using a bi-directional Gated Recurrent Unit (bi-GRU) BIBREF35, BIBREF4. We set the output dimension of the GRU to 200 and apply dropout with rate $.2$ on the output. We optimize the RNN with RMSprop BIBREF47 and use the same loss function and batch size as the CNN model. We leave the other parameters at their Keras defaults BIBREF45. Experiments ::: Baseline Models ::: BERT BERT is a transformer-based pre-trained language model trained on billions of sentences publicly available on the web BIBREF37, which can effectively capture precise text semantics and useful signals. We implement a BERT-based classification model using HuggingFace's Transformers BIBREF48. The model encodes each document into a fixed-size (768-dimensional) representation and feeds it to a linear prediction layer. The model is optimized by AdamW with a warmup proportion of $.1$ and a learning rate of $2e^{-5}$. We leave the remaining parameters at their defaults, fine-tune for 4 epochs and set the batch size to 32 BIBREF49. The classification model loads the “bert-base-uncased” pre-trained BERT model for English and the “bert-base-multilingual-uncased” multilingual BERT model BIBREF50 for the other languages. The multilingual BERT model follows the same training method as BERT but uses Wikipedia text from the top 104 languages. Due to the label imbalance shown in Table TABREF8, we balance training instances by randomly oversampling the minority class during training. Experiments ::: Evaluation Metrics ::: Performance Evaluation. To measure overall performance, we evaluate models with four metrics: accuracy (Acc), weighted F1 score (F1-w), macro F1 score (F1-m) and area under the ROC curve (AUC). The F1 score combines precision and recall as $2*\frac{precision*recall}{precision+recall}$. We report F1-m considering that the datasets are imbalanced. Experiments ::: Evaluation Metrics ::: Fairness Evaluation. To evaluate group fairness, we measure the equality differences (ED) of true positive/negative and false positive/negative rates for each demographic factor. ED is a standard metric for evaluating the fairness and bias of document classifiers BIBREF0, BIBREF4, BIBREF5. This metric sums the differences between the rates within specific user groups and the overall rates. Taking the false positive rate (FPR) as an example, we calculate the equality difference as $FPED = \sum_{d \in D} |FPR_d - FPR_{overall}|$, where $D$ is a demographic factor (e.g., race) and $d$ is a demographic group (e.g., white or nonwhite).
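A small sketch of the false positive equality difference defined above; y_true, y_pred and groups are hypothetical aligned lists of binary labels, binary predictions and demographic group names, and the FNR/TPR/TNR variants would follow the same pattern.

```python
def false_positive_equality_difference(y_true, y_pred, groups):
    """FPED = sum over groups d of |FPR_d - FPR_overall| for binary labels in {0, 1}."""
    def fpr(pairs):
        # false positive rate among examples whose true label is negative (0)
        negatives = [(t, p) for t, p in pairs if t == 0]
        return sum(p for _, p in negatives) / len(negatives) if negatives else 0.0

    pairs = list(zip(y_true, y_pred))
    overall = fpr(pairs)
    return sum(
        abs(fpr([(t, p) for (t, p), g in zip(pairs, groups) if g == d]) - overall)
        for d in set(groups)
    )
```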
Results We present our evaluation results for performance and fairness in Table TABREF20 and Table TABREF29, respectively. Country and race have very skewed distributions in the Italian and Polish corpora; therefore, we omit the fairness evaluation for these two factors. Results ::: Overall performance evaluation. Table TABREF20 shows the performance of the baseline classifiers for hate speech classification on each of the five languages covered in our corpus. Among the four baseline classifiers, LR, CNN and RNN consistently perform well on all languages. Moreover, the neural models (CNN and RNN) substantially outperform LR on four out of five languages (all except Spanish). However, the results obtained by BERT are lower than those of the other baselines, with a more pronounced gap on the English dataset. One possible explanation is that BERT was pre-trained on Wikipedia documents, which differ significantly from the Twitter corpus in document length, word usage and grammar. For example, each tweet is a short document of around 20 tokens, while BERT is trained on long documents of up to 512 tokens. Existing research suggests that fine-tuning on the multilingual corpus can further improve the performance of BERT models BIBREF49. Results ::: Group fairness evaluation. We report the group fairness measurements in Table TABREF29. Generally, the RNN classifier achieves better and more stable performance across the major fairness evaluation tasks. Comparing the different baseline classifiers, we find that LR usually shows stronger biases than the neural classification models on the majority of the tasks. While the BERT classifier achieves comparatively lower accuracy and F1 scores, it exhibits less bias on most of the datasets. However, biases increase significantly on the Portuguese dataset, where the BERT classifier achieves better performance. We examine the relationship by fitting a linear model between two differences: the performance differences between the RNN and the other classifiers, and the SUM-ED differences between the RNN and the other classifiers. We find that classification performance is not significantly ($p$-value $> .05$) correlated with fairness and bias. The significant biases of classifiers vary across tasks and languages: the classifiers trained on Polish and Italian are biased the most by age and gender, the classifiers trained on Spanish and Portuguese are biased the most by country, and the classifiers trained on English tweets are the most unbiased across all attributes. Classifiers usually have very high bias scores on both gender and age in the Italian and Polish data; we find that age and gender both have very skewed distributions in these two datasets. Overall, our baselines provide a promising starting point for evaluating future methods of reducing demographic biases in document classification under the multilingual setting. Conclusion In this paper, we propose a new multilingual dataset covering four author demographic annotations (age, gender, race and country) for the hate speech detection task. We show the experimental results of several popular classification models in both overall and fairness performance evaluations. Our empirical exploration indicates that language variations across demographic groups can lead to biased classifiers. This dataset can be used for measuring the fairness of document classifiers along author-level attributes and for exploring bias factors across multilingual settings and multiple user factors.
The proposed framework for inferring author demographic attributes can be used to generate more large-scale datasets or even be applied to other social media sites (e.g., Amazon and Yelp). While we encode the demographic attributes into categories in this work, we will provide the inferred probabilities of the demographic attributes from Face++ to allow for broader research exploration. Our code, anonymized data and data statement BIBREF51 will be publicly available at https://github.com/xiaoleihuang/Multilingual_Fairness_LREC. Conclusion ::: Limitations While our dataset provides new information on author demographic attributes, and our analysis suggests directions toward reducing bias, a number of limitations must be acknowledged in order to appropriately interpret our findings. First, inferring user demographic attributes from profile information can be risky due to the limited accuracy of the inference toolkit. In this work, we present multiple strategies to reduce the errors introduced by the inference toolkits, such as human evaluation, manual screening and using external public profile information (Instagram). However, we cannot guarantee perfect accuracy of the demographic attributes, and errors in the attributes may themselves be “unfair” or unevenly distributed due to bias in the inference tools BIBREF24. Still, obtaining individual-level attributes is an important step toward understanding classifier fairness, and our results found biases across these groupings of users, even if some of the groupings contained errors. Second, because methods for inferring demographic attributes are not accurate enough to provide fine-grained information, our attribute categories are still too coarse-grained (binary age groups and gender, and only four race categories). Using coarse-grained attributes can hide the identities of specific demographic groups, including other racial minorities and people with non-binary gender. Broadening our analyses and evaluations to include more attribute values may require better methods of user attribute inference or different sources of data. Third, language variations across demographic groups might introduce annotation biases. Existing research BIBREF52 shows that annotators are more likely to annotate tweets containing African American English words as hate speech. Additionally, nationality and educational level might also impact the quality of annotations BIBREF20. Similarly, the different annotation sources of our dataset (which merged two different corpora) might vary in their annotation schemas. To reduce annotation biases due to the different annotation schemas, we merge the annotations into the two most compatible document categories: normal and hate speech. Annotation biases might still exist; therefore, we will release our original anonymized multilingual dataset for the research community. Acknowledgement The authors thank the anonymous reviewers for their insightful comments and suggestions. This work was supported in part by the National Science Foundation under award number IIS-1657338. This work was also supported in part by a research gift from Adobe.
It contains 106,350 documents
ff307b10e56f75de6a32e68e25a69899478a13e4
ff307b10e56f75de6a32e68e25a69899478a13e4_0
Q: Which document classifiers do they experiment with?
logistic regression (LR), recurrent neural network (RNN) BIBREF35, convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37
16af38f7c4774637cf8e04d4b239d6d72f0b0a3a
16af38f7c4774637cf8e04d4b239d6d72f0b0a3a_0
Q: How large is the dataset? Text: Introduction While document classification models should be objective and independent from human biases in documents, research have shown that the models can learn human biases and therefore be discriminatory towards particular demographic groups BIBREF0, BIBREF1, BIBREF2. The goal of fairness-aware document classifiers is to train and build non-discriminatory models towards people no matter what their demographic attributes are, such as gender and ethnicity. Existing research BIBREF0, BIBREF3, BIBREF4, BIBREF5, BIBREF1 in evaluating fairness of document classifiers focus on the group fairness BIBREF6, which refers to every demographic group has equal probability of being assigned to the positive predicted document category. However, the lack of original author demographic attributes and multilingual corpora bring challenges towards the fairness evaluation of document classifiers. First, the datasets commonly used to build and evaluate the fairness of document classifiers obtain derived synthetic author demographic attributes instead of the original author information. The common data sources either derive from Wikipedia toxic comments BIBREF0, BIBREF4, BIBREF5 or synthetic document templates BIBREF3, BIBREF4. The Wikipedia Talk corpus BIBREF7 provides demographic information of annotators instead of the authors, Equity Evaluation Corpus BIBREF3 are created by sentence templates and combinations of racial names and gender coreferences. While existing work BIBREF8, BIBREF9 infers user demographic information (white/black, young/old) from the text, such inference is still likely to cause confounding errors that impact and break the independence between demographic factors and the fairness evaluation of text classifiers. Second, existing research in the fairness evaluation mainly focus on only English resources, such as age biases in blog posts BIBREF9, gender biases in Wikipedia comments BIBREF0 and racial biases in hate speech detection BIBREF8. Different languages have shown different patterns of linguistic variations across the demographic attributes BIBREF10, BIBREF11, methods BIBREF12, BIBREF4 to reduce and evaluate the demographic bias in English corpora may not apply to other languages. For example, Spanish has gender-dependent nouns, but this does not exist in English BIBREF2; and Portuguese varies across Brazil and Portugal in both word usage and grammar BIBREF13. The rich variations have not been explored under the fairness evaluation due to lack of multilingual corpora. Additionally, while we have hate speech detection datasets in multiple languages BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, there is still no integrated multilingual corpora that contain author demographic attributes which can be used to measure group fairness. The lack of author demographic attributes and multilingual datasets limits research for evaluating classifier fairness and developing unbiased classifiers. In this study, we combine previously published corpora labeled for Twitter hate speech recognition in English BIBREF14, BIBREF19, BIBREF20, Italian BIBREF15, Polish BIBREF16, Portuguese BIBREF18, and Spanish BIBREF17, and publish this multilingual data augmented with author-level demographic information for four attributes: race, gender, age and country. The demographic factors are inferred from user profiles, which are independent from text documents, the tweets. 
To our best knowledge, this is the first multilingual hate speech corpus annotated with author attributes aiming for fairness evaluation. We start with presenting collection and inference steps of the datasets. Next, we take an exploratory study on the language variations across demographic groups on the English dataset. We then experiment with four multiple classification models to establish baseline levels of this corpus. Finally, we evaluate the fairness performance of those document classifiers. Data We assemble the annotated datasets for hate speech classification. To narrow down the data sources, we limit our dataset sources to the unique online social media site, Twitter. We have requested 16 published Twitter hate speech datasets, and finally obtained 7 of them in five languages. By using the Twitter streaming API, we collected the tweets annotated by hate speech labels and their corresponding user profiles in English BIBREF14, BIBREF19, BIBREF20, Italian BIBREF15, Polish BIBREF16, Portuguese BIBREF18, and Spanish BIBREF17. We binarize all tweets' labels (indicating whether a tweet has indications of hate speech), allowing to merge the different label sets and reduce the data sparsity. Whether a tweet is considered hate speech heavily depends on who the speaker is; for example, whether a racial slur is intended as hate speech depends in part on the speaker's race BIBREF14. Therefore, hate speech classifiers may not generalize well across all groups of people, and disparities in the detection offensive speech could lead to bias in content moderation BIBREF21. Our contribution is to further annotate the data with user demographic attributes inferred from their public profiles, thus creating a corpus suitable for evaluating author-level fairness for this hate speech recognition task across multiple languages. Data ::: User Attribute Inference We consider four user factors of age, race, gender and geographic location. For location, we inference two granularities, country and US region, but only experiment with the country attribute. While the demographic attributes can be inferred through tweets BIBREF22, BIBREF8, we intentionally exclude the contents from the tweets if they infer these user attributes, in order to make the evaluation of fairness more reliable and independent. If users were grouped based on attributes inferred from their text, then any differences in text classification across those groups could be related to the same text. Instead, we infer attributes from public user profile information (i.e., description, name and photo). Data ::: User Attribute Inference ::: Age, Race, Gender. We infer these attributes from each user's profile image by using Face++ (https://www.faceplusplus.com/), a computer vision API that provides estimates of demographic characteristics. Empirical comparisons of facial recognition APIs have found that Face++ is the most accurate tool on Twitter data BIBREF23 and works comparatively better for darker skins BIBREF24. For the gender, we choose the binary categories (male/female) by the predicted probabilities. We map the racial outputs into four categories: Asian, Black, Latino and White. We only keep users that appear to be at least 13 years old, and we save the first result from the API if multiple faces are identified. We experiment and evaluate with binarization of race and age with roughly balanced distributions (white and nonwhite, $\le $ median vs. 
elder age) to consider a simplified setting across different languages, since race is harder to infer accurately. Data ::: User Attribute Inference ::: Country. The country-level language variations can bring challenges that are worth to explore. We extract geolocation information from users whose profiles contained either numerical location coordinates or a well-formatted (matching a regular expression) location name. We fed the extracted values to the Google Maps API (https://maps.googleapis.com) to obtain structured location information (city, state, country). We first count the main country source and then binarize the country to indicate if a user is in the main country or not. For example, the majority of users in the English are from the United States (US), therefore, we can binarize the country attributes to indicate if the users are in the US or not. Data ::: Corpus Summary We show the corpus statistics in Table TABREF8 and summarize the full demographic distributions in Table TABREF9. The binary demographic attributes (age, country, gender, race) can bring several benefits. First, we can create comparatively balanced label distributions. We can observe that there are differences in the race and gender among Italian and Polish data, while other attributes across the other languages show comparably balanced demographic distributions. Second, we can reduce errors inferred from the Face++ on coarse labels. Third, it is more convenient for us to analyze, conduct experiments and evaluate the group fairness of document classifiers. Table TABREF8 presents different patterns of the corpus. The Polish data has the smallest users. This is because the data focuses on the people who own the most popular accounts in the Polish data BIBREF16, the other data collected tweets randomly. And the dataset shows a much more sparse distribution of the hate speech label than the other languages. Table TABREF9 presents different patterns of the user attributes. English, Portuguese and Spanish users are younger than the Italian and Polish users in the collected data. And both Italian and Polish show more skewed demographic distributions in country, gender and race, while the other datasets show more balanced distributions. Data ::: Demographic Inference Accuracy Image-based approaches will have inaccuracies, as a person's demographic attributes cannot be conclusively determined merely from their appearance. However, given the difficulty in obtaining ground truth values, we argue that automatically inferred attributes can still be informative for studying classifier fairness. If a classifier performs significantly differently across different groups of users, then this shows that the classifier is biased along certain groupings, even if those groupings are not perfectly aligned with the actual attributes they are named after. This subsection tries to quantify how reliably these groupings correspond to the demographic variables. Prior research found that Face++ achieves 93.0% and 92.0% accuracy on gender and ethnicity evaluations BIBREF23. We further conduct a small evaluation on the hate speech corpus by a small sample of annotated user profile photos providing a rough estimate of accuracy while acknowledging that our annotations are not ground truth. We obtained the annotations from the crowdsourcing website, Figure Eight (https://figure-eight.com/). We randomly sampled 50 users whose attributes came from Face++ in each language. 
We anonymize the user profiles and feed the information to the crowdsourcing website. Three annotators annotated each user photo with the binary demographic categories. To select qualified annotators and ensure quality of the evaluations, we set up 5 golden standard annotation questions for each language. The annotators can join the evaluation task only by passing the golden standard questions. We decide demographic attributes by majority votes and present evaluation results in Table TABREF11. Our final evaluations show that overall the Face++ achieves averaged accuracy scores of 82.8%, 88.4% and 94.4% for age, race and gender respectively. Data ::: Privacy Considerations To facilitate the study of classification fairness, we will publicly distribute this anonymized corpus with the inferred demographic attributes including both original and binarized versions. To preserve user privacy, we will not publicize the personal profile information, including user ids, photos, geocoordinates as well as other user profile information, which were used to infer the demographic attributes. We will, however, provide inferred demographic attributes in their original formats from the Face++ and Google Maps based on per request to allow wider researchers and communities to replicate the methodology and probe more depth of fairness in document classification. Language Variations across Demographic Groups Demographic factors can improve the performances of document classifiers BIBREF25, and demographic variations root in language, especially in social media data BIBREF26, BIBREF25. For example, language styles are highly correlated with authors' demographic attributes, such as age, race, gender and location BIBREF27, BIBREF28. Research BIBREF29, BIBREF12, BIBREF30 find that biases and stereotypes exist in word embeddings, which is widely used in document classification tasks. For example, “receptionist” is closer to females while “programmer” is closer to males, and “professor” is closer to Asian Americans while “housekeeper” is closer to Hispanic Americans. This motivates us to explore and test if the language variations hold in our particular dataset, how strong the effects are. We conduct the empirical analysis of demographic predictability on the English dataset. Language Variations across Demographic Groups ::: Are Demographic Factors Predictable in Documents? We examine how accurately the documents can predict author demographic attributes from three different levels: Word-level. We extract TF-IDF-weighted 1-, 2-grams features. POS-level. We use Tweebo parser BIBREF31 to tag and extract POS features. We count the POS tag and then normalize the counts for each document. Topic-level. We train a Latent Dirichlet Allocation BIBREF32 model with 20 topics using Gensim BIBREF33 with default parameters. Then a document can be represented as a probabilistic distribution over the 20 topics. We shuffle and split data into training (70%) and test (30%) sets. Three logistic classifiers are trained by the three levels of features separately. We measure the prediction accuracy and show the absolute improvements in Figure FIGREF18. The improved prediction accuracy scores over majority baselines suggest that language variations across demographic groups are encoded in the text documents. The results show that documents are the most predictable to the age attribute. 
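As a rough sketch of how the word-level predictability probe above could be set up with scikit-learn, consider the snippet below; the `tweets` and `age_labels` inputs are hypothetical placeholders, and the POS- and topic-level probes would only swap the feature extractor.

```python
# A minimal sketch of the word-level predictability probe: TF-IDF 1-/2-gram
# features, a 70/30 shuffled split, a logistic classifier, and the absolute
# improvement over a majority-class baseline. Variable names are illustrative.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def word_level_predictability(tweets, age_labels, seed=42):
    # TF-IDF-weighted 1- and 2-gram features, as in the word-level setting.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    X = vectorizer.fit_transform(tweets)

    # 70/30 shuffled split, mirroring the setup described above.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, age_labels, test_size=0.3, shuffle=True, random_state=seed)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))

    # Majority-class baseline on the same test split.
    majority = Counter(y_tr).most_common(1)[0][0]
    baseline = accuracy_score(y_te, [majority] * len(y_te))

    # The reported quantity is the absolute improvement over the baseline.
    return acc - baseline
```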
We can also observe that word-level features are the most predictive of demographic factors, while the POS features are the least predictive of the country factor. These results suggest that there might be a connection between language variations and demographic groups, which motivates us to further explore the language variations based on word features. We rank the word features by mutual information classification BIBREF34 and present the top 10 unigram features in Table TABREF14. The qualitative results show the word features most predictive of the demographic groups and suggest that such variations may impact the extracted feature representations and, in turn, the training of fair document classifiers. Table TABREF14 shows that when classifying hate speech tweets, the n-words and b-words are more significantly correlated with white users than with the other racial groups. However, this is the opposite of what existing work BIBREF8 reports, namely that the two types of words are more significantly correlated with black users. This highlights the value of our approach: to avoid confounding errors, we obtain author demographic information independently of the user-generated documents. Experiments Demographic variations are rooted in documents, especially in social media data BIBREF26, BIBREF25, BIBREF10. Such variations could further impact the performance and fairness of document classifiers. In this study, we experiment with four classification models: logistic regression (LR), a recurrent neural network (RNN) BIBREF35, a convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37. We present baseline results for both performance and fairness evaluations across the multilingual corpus. Experiments ::: Data Preprocessing To anonymize user information, we hash user and tweet ids and then replace hyperlinks, usernames, and hashtags with generic symbols (URL, USER, HASHTAG). Documents are lowercased and tokenized using NLTK BIBREF38. The corpus is randomly split into training (70%), development (15%), and test (15%) sets. We train the models on the training set and find the optimal hyperparameters on the development set before the final evaluation on the test set. We randomly shuffle the training data at the beginning of each training epoch. Experiments ::: Baseline Models We implement and experiment with four baseline classification models. For a fair comparison, we keep the feature size up to 15K for each classifier across all five languages. We calculate the weight for each document category by $\frac{N}{N_l}$ BIBREF39, where $N$ is the number of documents in each language and $N_l$ is the number of documents labeled with the category. In particular, for training the BERT model, we append two additional tokens, “[CLS]” and “[SEP]”, at the start and end of each document respectively. For the neural models, we pad or truncate each document to 40 tokens. We use “unknown” as a replacement for unknown tokens. We initialize the CNN and RNN classifiers with pre-trained word embeddings BIBREF40, BIBREF41, BIBREF42, BIBREF43 and train the networks for up to 10 epochs. Experiments ::: Baseline Models ::: LR. We first extract TF-IDF-weighted features of uni-, bi-, and tri-grams on the corpora, using the most frequent 15K features with a minimum feature frequency of 2. We then train a LogisticRegression from scikit-learn BIBREF34. We use “liblinear” as the solver and leave the other parameters at their defaults. Experiments ::: Baseline Models ::: CNN. 
We implement the Convolutional Neural Network (CNN) classifier described in BIBREF36, BIBREF44 by Keras BIBREF45. We first apply 100 filters with three different kernel sizes, 3, 4 and 5. After the convolution operations, we feed the concatenated features to a fully connected layer and output document representations with 100 dimensions. We apply “softplus” function with a l2 regularization with $.03$ and a dropout rate with $.3$ in the dense layer. The model feeds the document representation to final prediction. We train the model with batch size 64, set model optimizer as Adam BIBREF46 and calculate loss values by the cross entropy function. We keep all other parameter settings as described in the paper BIBREF36. Experiments ::: Baseline Models ::: RNN. We build a recurrent neural network (RNN) classifier by using bi-directional Gated Recurrent Unit (bi-GRU) BIBREF35, BIBREF4. We set the output dimension of GRU as 200 and apply a dropout on the output with rate $.2$. We optimize the RNN with RMSprop BIBREF47 and use the same loss function and batch size as the CNN model. We leave the other parameters as default in the Keras BIBREF45. Experiments ::: Baseline Models ::: BERT BERT is a transformer-based pre-trained language model which was well trained on multi-billion sentences publicly available on the web BIBREF37, which can effectively generate the precise text semantics and useful signals. We implement a BERT-based classification model by HuggingFace's Transformers BIBREF48. The model encodes each document into a fixed size (768) of representation and feed to a linear prediction layer. The model is optimized by AdamW with a warmup and learning rate as $.1$ and $2e^{-5}$ respectively. We leave parameters as their default, conduct fine-tuning steps with 4 epochs and set batch size as 32 BIBREF49. The classification model loads “bert-base-uncased” pre-trained BERT model for English and “bert-base-multilingual-uncased” multilingual BERT model BIBREF50 for the other languages. The multilingual BERT model follows the same method of BERT by using Wikipedia text from the top 104 languages. Due to the label imbalance shown in Table TABREF8, we balance training instances by randomly oversampling the minority during the training process. Experiments ::: Evaluation Metrics ::: Performance Evaluation. To measure overall performance, we evaluate models by four metrics: accuracy (Acc), weighted F1 score (F1-w), macro F1 score (F1-m) and area under the ROC curve (AUC). The F1 score coherently combines both precision and recall by $2*\frac{precision*recall}{precision+recall}$. We report F1-m considering that the datasets are imbalanced. Experiments ::: Evaluation Metrics ::: Fairness Evaluation. To evaluate group fairness, we measure the equality differences (ED) of true positive/negative and false positive/negative rates for each demographic factor. ED is a standard metric to evaluate fairness and bias of document classifiers BIBREF0, BIBREF4, BIBREF5. This metric sums the differences between the rates within specific user groups and the overall rates. Taking the false positive rate (FPR) as an example, we calculate the equality difference by: , where $D$ is a demographic factor (e.g., race) and $d$ is a demographic group (e.g., white or nonwhite). Results We have presented our evaluation results of performance and fairness in Table TABREF20 and Table TABREF29 respectively. 
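As an illustration of the equality-difference metric described above, the sketch below computes the false-positive-rate version of ED by summing, over the groups $d$ of a demographic factor $D$, the absolute gap between the group-level FPR and the overall FPR; the function and variable names are hypothetical, and the exact formulation in this work may differ in minor details.

```python
# A rough sketch of the FPR equality difference: the sum over demographic
# groups of the absolute difference between the group-level false positive
# rate and the overall false positive rate. `y_true`, `y_pred` and `groups`
# are assumed to be parallel lists with binary labels/predictions.

def false_positive_rate(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0


def fp_equality_difference(y_true, y_pred, groups):
    overall = false_positive_rate(y_true, y_pred)
    ed = 0.0
    for d in set(groups):                      # e.g. {"white", "nonwhite"}
        idx = [i for i, g in enumerate(groups) if g == d]
        group_fpr = false_positive_rate([y_true[i] for i in idx],
                                        [y_pred[i] for i in idx])
        ed += abs(group_fpr - overall)
    return ed
```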
Country and race have very skewed distributions in the Italian and Polish corpora; therefore, we omit the fairness evaluation on these two factors. Results ::: Overall performance evaluation. Table TABREF20 reports the performance of the baseline classifiers for hate speech classification on the proposed corpus, obtained separately for each of the five languages it covers. Among the four baseline classifiers, LR, CNN and RNN consistently perform well on all languages. Moreover, the neural models (CNN and RNN) substantially outperform LR on four out of five languages (all except Spanish). However, the results obtained by BERT are lower than those of the other baselines, with the most significant gap on the English dataset. One possible explanation is that BERT was pre-trained on Wikipedia documents, which differ significantly from the Twitter corpus in document length, word usage and grammar. For example, each tweet is a short document of about 20 tokens, whereas BERT is trained on long documents of up to 512 tokens. Existing research suggests that fine-tuning on a multilingual corpus can further improve the performance of BERT models BIBREF49. Results ::: Group fairness evaluation. We report the group fairness measurements in Table TABREF29. Generally, the RNN classifier achieves better and more stable performance across the major fairness evaluation tasks. Comparing the different baseline classifiers, we find that LR usually shows stronger biases than the neural classification models on the majority of the tasks. While the BERT classifier achieves comparatively lower accuracy and F1 scores, it is less biased on most of the datasets. However, biases increase significantly on the Portuguese dataset, where the BERT classifier achieves better performance. We examine this relationship by fitting a linear model between two differences: the performance differences between the RNN and the other classifiers, and the SUM-ED differences between the RNN and the other classifiers. We find that classification performance does not correlate significantly ($p-value > .05$) with fairness and bias. The significant biases of the classifiers vary across tasks and languages: the classifiers trained on Polish and Italian are biased the most by Age and Gender, the classifiers trained on Spanish and Portuguese are biased the most by Country, and the classifiers trained on English tweets are the most unbiased across all attributes. Classifiers usually have very high bias scores on both gender and age in the Italian and Polish data; we find that both age and gender have very skewed distributions in these two datasets. Overall, our baselines provide a promising starting point for evaluating future methods for reducing demographic biases in document classification under the multilingual setting. Conclusion In this paper, we propose a new multilingual dataset covering four author demographic annotations (age, gender, race and country) for the hate speech detection task. We show the experimental results of several popular classification models in both overall and fairness performance evaluations. Our empirical exploration indicates that language variations across demographic groups can lead to biased classifiers. This dataset can be used for measuring the fairness of document classifiers along author-level attributes and for exploring bias factors across multilingual settings and multiple user factors. 
The proposed framework for inferring the author demographic attributes can be used to generate more large-scale datasets or even applied to other social media sites (e.g., Amazon and Yelp). While we encode the demographic attributes into categories in this work, we will provide inferred probabilities of the demographic attributes from Face++ to allow for broader research exploration. Our code, anonymized data and data statement BIBREF51 will be publicly available at https://github.com/xiaoleihuang/Multilingual_Fairness_LREC. Conclusion ::: Limitations While our dataset provides new information on author demographic attributes, and our analysis suggest directions toward reducing bias, a number of limitations must be acknowledged in order to appropriately interpret our findings. First, inferring user demographic attributes by profile information can be risky due to the accuracy of the inference toolkit. In this work, we present multiple strategies to reduce the errors bringing by the inference toolkits, such as human evaluation, manually screening and using external public profile information (Instagram). However, we cannot guarantee perfect accuracy of the demographic attributes, and, errors in the attributes may themselves be “unfair” or unevenly distributed due to bias in the inference tools BIBREF24. Still, obtaining individual-level attributes is an important step toward understanding classifier fairness, and our results found biases across these groupings of users, even if some of the groupings contained errors. Second, because methods for inferring demographic attributes are not accurate enough to provide fine-grained information, our attribute categories are still too coarse-grained (binary age groups and gender, and only four race categories). Using coarse-grained attributes would hide the identities of specific demographic groups, including other racial minorities and people with non-binary gender. Broadening our analyses and evaluations to include more attribute values may require better methods of user attribute inference or different sources of data. Third, language variations across demographic groups might introduce annotation biases. Existing research BIBREF52 shows that annotators are more likely to annotate tweets containing African American English words as hate speech. Additionally, the nationality and educational level might also impact on the quality of annotations BIBREF20. Similarly, different annotation sources of our dataset (which merged two different corpora) might have variations in annotating schema. To reduce annotation biases due to the different annotating schema, we merge the annotations into the two most compatible document categories: normal and hate speech. Annotation biases might still exist, therefore, we will release our original anonymized multilingual dataset for research communities. Acknowledgement The authors thank the anonymous reviews for their insightful comments and suggestions. This work was supported in part by the National Science Foundation under award number IIS-1657338. This work was also supported in part by a research gift from Adobe.
over 104k documents
e9209ebf38c4ae4d93884f68c7b5b3444e0604f3
e9209ebf38c4ae4d93884f68c7b5b3444e0604f3_0
Q: What evaluation metrics are used to measure diversity? Text: Introduction Conventional spoken dialogue systems (SDS) require a substantial amount of hand-crafted rules to achieve good interaction with users. The large amount of required engineering limits the scalability of these systems to settings with new or multiple domains. Recently, statistical approaches have been studied that allow natural, efficient and more diverse interaction with users without depending on pre-defined rules BIBREF0 , BIBREF1 , BIBREF2 . Natural language generation (NLG) is an essential component of an SDS. Given a semantic representation (SR) consisting of a dialogue act and a set of slot-value pairs, the generator should produce natural language containing the desired information. Traditionally NLG was based on templates BIBREF3 , which produce grammatically-correct sentences that contain all desired information. However, the lack of variation of these sentences made these systems seem tedious and monotonic. Trainable generators BIBREF4 , BIBREF5 can generate several sentences for the same SR, but the dependence on pre-defined operations limits their potential. Corpus-based approaches BIBREF6 , BIBREF7 learn to generate natural language directly from data without pre-defined rules. However, they usually require alignment between the sentence and the SR. Recently, Wen et al. wensclstm15 proposed an RNN-based approach, which outperformed previous methods on several metrics. However, the generated sentences often did not include all desired attributes. The variational autoencoder BIBREF8 enabled for the first time the generation of complicated, high-dimensional data such as images. The conditional variational autoencoder (CVAE) BIBREF9 , firstly proposed for image generation, has a similar structure to the VAE with an additional dependency on a condition. Recently, the CVAE has been applied to dialogue systems BIBREF10 , BIBREF11 , BIBREF12 using the previous dialogue turns as the condition. However, their output was not required to contain specific information. In this paper, we improve RNN-based generators by adapting the CVAE to the difficult task of cross-domain NLG. Due to the additional latent information encoded by the CVAE, our model outperformed the SCLSTM at conveying all information. Furthermore, our model reaches better results when the training data is limited. Variational Autoencoder The VAE is a generative latent variable model. It uses a neural network (NN) to generate $\hat{x}$ from a latent variable $z$ , which is sampled from the prior $p_{\theta }(z)$ . The VAE is trained such that $\hat{x}$ is a sample of the distribution $p_{D}(x)$ from which the training data was collected. Generative latent variable models have the form $p_{\theta }(x)=\int _{z}p_{\theta }(x|z)p_{\theta }(z) dz$ . In a VAE an NN, called the decoder, models $p_{\theta }(x|z)$ and would ideally be trained to maximize the expectation of the above integral $E\left[p_{\theta }(x)\right]$ . Since this is intractable, the VAE uses another NN, called the encoder, to model $q_{\phi }(z|x)$ which should approximate the posterior $p_{\theta }(z|x)$ . The NNs in the VAE are trained to maximise the variational lower bound (VLB) to $z$0 , which is given by: $$\begin{aligned} L_{VAE}(\theta , \phi ; x) = -KL(q_{\phi }(z|x)||p_{\theta }(z)) \\ + E_{q_{\phi }(z|x)}[\log p_{\theta }(x|z)] \end{aligned}$$ (Eq. 
2) The first term is the KL-divergence between the approximated posterior and the prior, which encourages similarity between the two distributions. The second term is the likelihood of the data given samples from the approximated posterior. The CVAE has a similar structure, but the prior is modelled by another NN, called the prior network. The prior network is conditioned on $c$ . The new objective function can now be written as: $$L_{CVAE}(\theta , \phi ; x, c) = -KL(q_{\phi }(z|x, c)||p_{\theta }(z|c)) \\ + E_{q_{\phi }(z|x,c)}[\log p_{\theta }(x|z,c)]$$ (Eq. 3) When generating data, the encoder is not used and $z$ is sampled from $p_{\theta }(z|c)$ . Semantically Conditioned VAE The structure of our model is depicted in Fig. 1 , which, conditioned on an SR, generates the system's word-level response $x$ . An SR consists of three components: the domain, a dialogue act and a set of slot-value pairs. Slots are attributes required to appear in $x$ (e.g. a hotel's area). A slot can have a value. Then the two are called a slot-value pair (e.g. area=north). $x$ is delexicalised, which means that slot values are replaced by corresponding slot tokens. The condition $c$ of our model is the SR represented as two 1-hot vectors for the domain and the dialogue act as well as a binary vector for the slots. During training, $x$ is first passed through a single layer bi-directional LSTM, the output of which is concatenated with $c$ and passed to the recognition network. The recognition network parametrises a Gaussian distribution $\mathcal {N}(\mu _{post}, \sigma _{post})$ which is the posterior.The prior network only has $c$ as its input and parametrises a Gaussian distribution $\mathcal {N}(\mu _{prior}, \sigma _{prior})$ which is the prior. Both networks are fully-connected (FC) NNs with one and two layers respectively. During training, $z$ is sampled from the posterior. When the model is used for generation, $x$0 is sampled from the prior. The decoder is an SCLSTM BIBREF13 using $x$1 as its initial hidden state and initial cell vector. The first input to the SCLSTM is a start-of-sentence (sos) token and the model generates words until it outputs an end-of-sentence (eos) token. Optimization When the decoder in the CVAE is powerful on its own, it tends to ignore the latent variable $z$ since the encoder fails to encode enough information into $z$ . Regularization methods can be introduced in order to push the encoder towards learning a good representation of the latent variable $z$ . Since the KL-component of the VLB does not contribute towards learning a meaningful $z$ , increasing the weight of it gradually from 0 to 1 during training helps to encode a better representation in $z$ . This method is termed KL-annealing BIBREF14 . In addition, inspired by BIBREF12 , we introduce a regularization method using another NN which is trained to use $z$ to recover the condition $c$ . The NN is split into three separate FC NNs of one layer each, which independently recover the domain, dialogue-act and slots components of $c$ . The objective of our model can be written as: $$L_{SCVAE}(\theta , \phi ;x, c) = L_{CVAE}(\theta , \phi ;x, c) \\ + E_{q_{\phi }(z|x, c)}[\log p(x_{D}|z)+\log p(x_{A}|z)+ \\ \log \prod _{i=1}^{|S|} p(x_{S_{i}}|z)]$$ (Eq. 7) where $x_{D}$ is the domain label, $x_{A}$ is the dialogue act label and $x_{S_{i}}$ are the slot labels with $|S|$ slots in the SR. In the proposed model, the CVAE learns to encode information about both the sentence and the SR into $z$ . 
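To make the shape of this objective concrete, the following is a schematic PyTorch-style sketch of the per-batch SCVAE loss implied by Eq. (7), with the KL-annealing weight folded in; the tensor names and the assumption of diagonal-Gaussian prior and posterior networks follow the model description above, but this is not the authors' code.

```python
# Schematic SCVAE loss: analytic KL between the diagonal-Gaussian posterior
# and prior, the reconstruction likelihood from the decoder, and the extra
# terms that recover the domain, dialogue act and slots from z.
import torch
import torch.nn.functional as F


def scvae_loss(recon_logp,          # log p(x|z,c) summed over the sentence
               mu_q, logvar_q,      # recognition (posterior) network outputs
               mu_p, logvar_p,      # prior network outputs
               domain_logits, domain_y,
               act_logits, act_y,
               slot_logits, slot_y,  # multi-label slot targets in {0, 1}
               kl_weight=1.0):       # annealed from 0 to 1 during training
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians.
    kl = 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0, dim=-1)

    # Attribute-recovery regularizer:
    # -log p(x_D|z) - log p(x_A|z) - sum_i log p(x_Si|z)
    recover = (F.cross_entropy(domain_logits, domain_y, reduction="none")
               + F.cross_entropy(act_logits, act_y, reduction="none")
               + F.binary_cross_entropy_with_logits(
                     slot_logits, slot_y, reduction="none").sum(dim=-1))

    # Negative of the objective in Eq. (7), so it can be minimized.
    return (kl_weight * kl - recon_logp + recover).mean()
```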
Using $z$ as its initial state, the decoder is better at generating sentences with desired attributes. In section "Visualization of Latent Variable zz" a visualization of the latent space demonstrates that a semantically meaningful representation for $z$ was learned. Dataset and Setup The proposed model is used for an SDS that provides information about restaurants, hotels, televisions and laptops. It is trained on a dataset BIBREF15 , which consists of sentences with corresponding semantic representations. Table 1 shows statistics about the corpus which was split into a training, validation and testing set according to a 3:1:1 split. The dataset contains 14 different system dialogue acts. The television and laptop domains are much more complex than other domains. There are around 7k and 13k different SRs possible for the TV and the laptop domain respectively. For the restaurant and hotel domains only 248 and 164 unique SRs are possible. This imbalance makes the NLG task more difficult. The generators were implemented using the PyTorch Library BIBREF16 . The size of decoder SCLSTM and thus of the latent variable was set to 128. KL-annealing was used, with the weight of the KL-loss reaching 1 after 5k mini-batch updates. The slot error rate (ERR), used in BIBREF6 , BIBREF17 , is the metric that measures the model's ability to convey the desired information. ERR is defined as: $(p+q)/N$ , where $N$ is the number of slots in the SR, $p$ and $q$ are the number of missing and redundant slots in the generated sentence. The BLEU-4 metric and perplexity (PPL) are also reported. The baseline SCLSTM is optimized, which has shown to outperform template-based methods and trainable generators BIBREF13 . NLG often uses the over-generation and reranking paradigm BIBREF6 . The SCVAE can generate multiple sentences by sampling multiple $z$ , while the SCLSTM has to sample different words from the output distribution.In our experiments ten sentences are generated per SR. Table 4 in the appendix shows one SR in each domain with five illustrative sentences generated by our model. Visualization of Latent Variable zz 2D-projections of $z$ for each data point in the test set are shown in Fig. 2 , by using PCA for dimensionality reduction. In Fig. 2 a, data points of the restaurant, hotel, TV and laptop domain are marked as blue, green, red and yellow respectively. As can be seen, data points from the laptop domain are contained within four distinct clusters. In addition, there is a large overlap of the TV and laptop domains, which is not surprising as they share all dialogue acts (DAs). Similarly, there is overlap of the restaurant and hotel domains. In Fig. 2 b, the eight most frequent DAs are color-coded. recommend, depicted as green, has a similar distribution to the laptop domain in Fig. 2 a, since recommend happens mostly in the laptop domain. This suggests that our model learns to map similar SRs into close regions within the latent space. Therefore, $z$ contains meaningful information in regards to the domain, DAs and slots. Empirical Comparison Table 2 shows the comparison between SCVAE and SCLSTM. Both are trained on the full cross-domain dataset, and tested on the four domains individually. The SCVAE outperforms the SCLSTM on all metrics. For the highly complex TV and laptop domains, the SCVAE leads to dramatic improvements in ERR. This shows that the additional sentence level conditioning through $z$ helps to convey all desired attributes. Fig. 
3 shows BLEU and ERR results when the SCVAE and SCLSTM are trained on varying amounts of data. The SCVAE has a lower ERR than the SCLSTM across the varying amounts of training data. For very slow amounts of data the SCVAE outperforms the SCLSTM even more. In addition, our model consistently achieves better results on the BLEU metric. For the K-shot learning experiments, we trained the model using all training examples from three domains and only 300 examples from the target domain. The target domain is the domain we test on. As seen from Table 3 , the SCVAE outperforms the SCLSTM in all domains except hotel. This might be because the hotel domain is the simplest and the model does not need to rely on the knowledge from other domains. The SCVAE strongly outperforms the SCLSTM for the complex TV and laptop domains where the number of distinct SRs is large. This suggests that the SCVAE is better at transferring knowledge between domains. Conclusion In this paper, we propose a semantically conditioned variational autoencoder (SCVAE) for natural language generation. The SCVAE encodes information about both the semantic representation and the sentence into a latent variable $z$ . Due to a newly proposed regularization method, the latent variable $z$ contains semantically meaningful information. Therefore, conditioning on $z$ leads to a strong improvement in generating sentences with all desired attributes. In an extensive comparison the SCVAE outperforms the SCLSTM on a range of metrics when training on different sizes of data and for K-short learning. Especially, when testing the ability to convey all desired information within complex domains, the SCVAE shows significantly better results. Acknowledgments Bo-Hsiang Tseng is supported by Cambridge Trust and the Ministry of Education, Taiwan. This research was partly funded by the EPSRC grant EP/M018946/1 Open Domain Statistical Spoken Dialogue Systems. Florian Kreyssig is supported by the Studienstiftung des Deutschen Volkes. Paweł Budzianowski is supported by the EPSRC and Toshiba Research Europe Ltd.
Unanswerable
4319a13a6c4a9494ccb465509c9d4265f63dc9b5
4319a13a6c4a9494ccb465509c9d4265f63dc9b5_0
Q: How is some information lost in the RNN-based generation models? Text: Introduction Conventional spoken dialogue systems (SDS) require a substantial amount of hand-crafted rules to achieve good interaction with users. The large amount of required engineering limits the scalability of these systems to settings with new or multiple domains. Recently, statistical approaches have been studied that allow natural, efficient and more diverse interaction with users without depending on pre-defined rules BIBREF0 , BIBREF1 , BIBREF2 . Natural language generation (NLG) is an essential component of an SDS. Given a semantic representation (SR) consisting of a dialogue act and a set of slot-value pairs, the generator should produce natural language containing the desired information. Traditionally NLG was based on templates BIBREF3 , which produce grammatically-correct sentences that contain all desired information. However, the lack of variation of these sentences made these systems seem tedious and monotonic. Trainable generators BIBREF4 , BIBREF5 can generate several sentences for the same SR, but the dependence on pre-defined operations limits their potential. Corpus-based approaches BIBREF6 , BIBREF7 learn to generate natural language directly from data without pre-defined rules. However, they usually require alignment between the sentence and the SR. Recently, Wen et al. wensclstm15 proposed an RNN-based approach, which outperformed previous methods on several metrics. However, the generated sentences often did not include all desired attributes. The variational autoencoder BIBREF8 enabled for the first time the generation of complicated, high-dimensional data such as images. The conditional variational autoencoder (CVAE) BIBREF9 , firstly proposed for image generation, has a similar structure to the VAE with an additional dependency on a condition. Recently, the CVAE has been applied to dialogue systems BIBREF10 , BIBREF11 , BIBREF12 using the previous dialogue turns as the condition. However, their output was not required to contain specific information. In this paper, we improve RNN-based generators by adapting the CVAE to the difficult task of cross-domain NLG. Due to the additional latent information encoded by the CVAE, our model outperformed the SCLSTM at conveying all information. Furthermore, our model reaches better results when the training data is limited. Variational Autoencoder The VAE is a generative latent variable model. It uses a neural network (NN) to generate $\hat{x}$ from a latent variable $z$ , which is sampled from the prior $p_{\theta }(z)$ . The VAE is trained such that $\hat{x}$ is a sample of the distribution $p_{D}(x)$ from which the training data was collected. Generative latent variable models have the form $p_{\theta }(x)=\int _{z}p_{\theta }(x|z)p_{\theta }(z) dz$ . In a VAE an NN, called the decoder, models $p_{\theta }(x|z)$ and would ideally be trained to maximize the expectation of the above integral $E\left[p_{\theta }(x)\right]$ . Since this is intractable, the VAE uses another NN, called the encoder, to model $q_{\phi }(z|x)$ which should approximate the posterior $p_{\theta }(z|x)$ . The NNs in the VAE are trained to maximise the variational lower bound (VLB) to $z$0 , which is given by: $$\begin{aligned} L_{VAE}(\theta , \phi ; x) = -KL(q_{\phi }(z|x)||p_{\theta }(z)) \\ + E_{q_{\phi }(z|x)}[\log p_{\theta }(x|z)] \end{aligned}$$ (Eq. 
2) The first term is the KL-divergence between the approximated posterior and the prior, which encourages similarity between the two distributions. The second term is the likelihood of the data given samples from the approximated posterior. The CVAE has a similar structure, but the prior is modelled by another NN, called the prior network. The prior network is conditioned on $c$ . The new objective function can now be written as: $$L_{CVAE}(\theta , \phi ; x, c) = -KL(q_{\phi }(z|x, c)||p_{\theta }(z|c)) \\ + E_{q_{\phi }(z|x,c)}[\log p_{\theta }(x|z,c)]$$ (Eq. 3) When generating data, the encoder is not used and $z$ is sampled from $p_{\theta }(z|c)$ . Semantically Conditioned VAE The structure of our model is depicted in Fig. 1 , which, conditioned on an SR, generates the system's word-level response $x$ . An SR consists of three components: the domain, a dialogue act and a set of slot-value pairs. Slots are attributes required to appear in $x$ (e.g. a hotel's area). A slot can have a value. Then the two are called a slot-value pair (e.g. area=north). $x$ is delexicalised, which means that slot values are replaced by corresponding slot tokens. The condition $c$ of our model is the SR represented as two 1-hot vectors for the domain and the dialogue act as well as a binary vector for the slots. During training, $x$ is first passed through a single layer bi-directional LSTM, the output of which is concatenated with $c$ and passed to the recognition network. The recognition network parametrises a Gaussian distribution $\mathcal {N}(\mu _{post}, \sigma _{post})$ which is the posterior.The prior network only has $c$ as its input and parametrises a Gaussian distribution $\mathcal {N}(\mu _{prior}, \sigma _{prior})$ which is the prior. Both networks are fully-connected (FC) NNs with one and two layers respectively. During training, $z$ is sampled from the posterior. When the model is used for generation, $x$0 is sampled from the prior. The decoder is an SCLSTM BIBREF13 using $x$1 as its initial hidden state and initial cell vector. The first input to the SCLSTM is a start-of-sentence (sos) token and the model generates words until it outputs an end-of-sentence (eos) token. Optimization When the decoder in the CVAE is powerful on its own, it tends to ignore the latent variable $z$ since the encoder fails to encode enough information into $z$ . Regularization methods can be introduced in order to push the encoder towards learning a good representation of the latent variable $z$ . Since the KL-component of the VLB does not contribute towards learning a meaningful $z$ , increasing the weight of it gradually from 0 to 1 during training helps to encode a better representation in $z$ . This method is termed KL-annealing BIBREF14 . In addition, inspired by BIBREF12 , we introduce a regularization method using another NN which is trained to use $z$ to recover the condition $c$ . The NN is split into three separate FC NNs of one layer each, which independently recover the domain, dialogue-act and slots components of $c$ . The objective of our model can be written as: $$L_{SCVAE}(\theta , \phi ;x, c) = L_{CVAE}(\theta , \phi ;x, c) \\ + E_{q_{\phi }(z|x, c)}[\log p(x_{D}|z)+\log p(x_{A}|z)+ \\ \log \prod _{i=1}^{|S|} p(x_{S_{i}}|z)]$$ (Eq. 7) where $x_{D}$ is the domain label, $x_{A}$ is the dialogue act label and $x_{S_{i}}$ are the slot labels with $|S|$ slots in the SR. In the proposed model, the CVAE learns to encode information about both the sentence and the SR into $z$ . 
Using $z$ as its initial state, the decoder is better at generating sentences with desired attributes. In section "Visualization of Latent Variable zz" a visualization of the latent space demonstrates that a semantically meaningful representation for $z$ was learned. Dataset and Setup The proposed model is used for an SDS that provides information about restaurants, hotels, televisions and laptops. It is trained on a dataset BIBREF15 , which consists of sentences with corresponding semantic representations. Table 1 shows statistics about the corpus which was split into a training, validation and testing set according to a 3:1:1 split. The dataset contains 14 different system dialogue acts. The television and laptop domains are much more complex than other domains. There are around 7k and 13k different SRs possible for the TV and the laptop domain respectively. For the restaurant and hotel domains only 248 and 164 unique SRs are possible. This imbalance makes the NLG task more difficult. The generators were implemented using the PyTorch Library BIBREF16 . The size of decoder SCLSTM and thus of the latent variable was set to 128. KL-annealing was used, with the weight of the KL-loss reaching 1 after 5k mini-batch updates. The slot error rate (ERR), used in BIBREF6 , BIBREF17 , is the metric that measures the model's ability to convey the desired information. ERR is defined as: $(p+q)/N$ , where $N$ is the number of slots in the SR, $p$ and $q$ are the number of missing and redundant slots in the generated sentence. The BLEU-4 metric and perplexity (PPL) are also reported. The baseline SCLSTM is optimized, which has shown to outperform template-based methods and trainable generators BIBREF13 . NLG often uses the over-generation and reranking paradigm BIBREF6 . The SCVAE can generate multiple sentences by sampling multiple $z$ , while the SCLSTM has to sample different words from the output distribution.In our experiments ten sentences are generated per SR. Table 4 in the appendix shows one SR in each domain with five illustrative sentences generated by our model. Visualization of Latent Variable zz 2D-projections of $z$ for each data point in the test set are shown in Fig. 2 , by using PCA for dimensionality reduction. In Fig. 2 a, data points of the restaurant, hotel, TV and laptop domain are marked as blue, green, red and yellow respectively. As can be seen, data points from the laptop domain are contained within four distinct clusters. In addition, there is a large overlap of the TV and laptop domains, which is not surprising as they share all dialogue acts (DAs). Similarly, there is overlap of the restaurant and hotel domains. In Fig. 2 b, the eight most frequent DAs are color-coded. recommend, depicted as green, has a similar distribution to the laptop domain in Fig. 2 a, since recommend happens mostly in the laptop domain. This suggests that our model learns to map similar SRs into close regions within the latent space. Therefore, $z$ contains meaningful information in regards to the domain, DAs and slots. Empirical Comparison Table 2 shows the comparison between SCVAE and SCLSTM. Both are trained on the full cross-domain dataset, and tested on the four domains individually. The SCVAE outperforms the SCLSTM on all metrics. For the highly complex TV and laptop domains, the SCVAE leads to dramatic improvements in ERR. This shows that the additional sentence level conditioning through $z$ helps to convey all desired attributes. Fig. 
3 shows BLEU and ERR results when the SCVAE and SCLSTM are trained on varying amounts of data. The SCVAE has a lower ERR than the SCLSTM across the varying amounts of training data. For very slow amounts of data the SCVAE outperforms the SCLSTM even more. In addition, our model consistently achieves better results on the BLEU metric. For the K-shot learning experiments, we trained the model using all training examples from three domains and only 300 examples from the target domain. The target domain is the domain we test on. As seen from Table 3 , the SCVAE outperforms the SCLSTM in all domains except hotel. This might be because the hotel domain is the simplest and the model does not need to rely on the knowledge from other domains. The SCVAE strongly outperforms the SCLSTM for the complex TV and laptop domains where the number of distinct SRs is large. This suggests that the SCVAE is better at transferring knowledge between domains. Conclusion In this paper, we propose a semantically conditioned variational autoencoder (SCVAE) for natural language generation. The SCVAE encodes information about both the semantic representation and the sentence into a latent variable $z$ . Due to a newly proposed regularization method, the latent variable $z$ contains semantically meaningful information. Therefore, conditioning on $z$ leads to a strong improvement in generating sentences with all desired attributes. In an extensive comparison the SCVAE outperforms the SCLSTM on a range of metrics when training on different sizes of data and for K-short learning. Especially, when testing the ability to convey all desired information within complex domains, the SCVAE shows significantly better results. Acknowledgments Bo-Hsiang Tseng is supported by Cambridge Trust and the Ministry of Education, Taiwan. This research was partly funded by the EPSRC grant EP/M018946/1 Open Domain Statistical Spoken Dialogue Systems. Florian Kreyssig is supported by the Studienstiftung des Deutschen Volkes. Paweł Budzianowski is supported by the EPSRC and Toshiba Research Europe Ltd.
the generated sentences often did not include all desired attributes.
5be62428f973a08c303c66018b081ad140c559c8
5be62428f973a08c303c66018b081ad140c559c8_0
Q: What is the model accuracy? Text: Introduction While social media sites provide users with the revolutionized communication medium by bringing the communication efficiency to a new level, they can be easily misused for widely spreading misinformation and fake news. Fake news and misinformation have been a long-standing issue for various purposes such as political propaganda BIBREF0 and financial propaganda BIBREF1. To fight against fake news, traditional publishers employed human editors to manually and carefully check the content of news articles to maintain their reputation. However, social media provided a new way to spread news, which lead to broader information sources and expanded audience (i.e., anyone can be a media and create news). In particular, users share news articles with their own opinion or read articles shared by their friends from whatever the source of news is with mostly blind trust BIBREF2 or with their own ideologies BIBREF3, BIBREF4. Although social media posts usually have a very short life cycle, the unprecedented amount of fake news may lead to a catastrophic impact on both individuals and society. Besides from misleading users with false information BIBREF4, widely propagated fake news could even cause trust crisis of entire news ecosystem BIBREF5, even further affecting both the cyberspace and physical space. In literature, researchers focused on four topics regarding fake news: characterization (i.e., types of fake news), motivation, circulation, and countermeasures BIBREF6, BIBREF7. A large body of work has been done on fake news identification BIBREF5, BIBREF8, BIBREF9, BIBREF10 by exploiting multiple content-related and social-related components. However, we notice that the fake news still has been widely spread even after early detection BIBREF11. Therefore, we propose to study a complementary approach to mitigate the spread and impact of fake news. Recently, community and journalists started building and maintaining fact-checking websites (e.g., Snopes.com). Social media users called fact-checkers also started using these fact-checking pages as factual evidences to debunk fake news by replying to fake news posters. Figure FIGREF1 demonstrates a real-world example of a fact-checker's fact-checking behavior on Twitter by debunking another user's false claim with a Snopes page URL as an evidence to support the factual correction. In BIBREF12, researchers found that these fact-checkers actively debunked fake news mostly within one day, and their replies were exposed to hundreds of millions users. To motivate these fact-checkers further quickly engage with fake news posters and intelligently consume increased volume of fact-checking articles, in this paper we propose a novel personalized fact-checking URL recommender system. According to BIBREF13, co-occurrence matrix within the given context provides information of semantic similarity between two objects. Therefore, in our proposed deep-learning based recommender system, we employ two extended matrices: user-user co-occurrence matrix, and URL-URL co-occurrence matrix to facilitate our recommendation. In addition, users tend to form relationships with like-minded people BIBREF14. Therefore, we incorporate each user's social context to capture the semantic relation to enhance the recommendation performance. Our main contributions are summarized as follows: We propose a new framework for personalized fact-checking URL recommendation, which relies on multi-relational context neighbors. 
We propose two attention mechanisms which allow for learning deep semantic representation of both a target user and a target URL at different granularity. Experimental results show that our proposed model outperforms eight state-of-the-art baselines, covering various types of recommendation approaches. Ablation study confirm the effectiveness of each component in our proposed framework. Related Works In this section, we briefly review related works and position our work within the following areas: (1) fake news and misinformation; (2) advancements in recommender systems; and (3) graph convolutional networks. Related Works ::: Fake News and Misinformation Fake news has attracted considerable attention since it is related to our daily life and has become a serious problem related to multiple areas such as politics BIBREF0 and finance BIBREF1. Social media sites have become one of popular mediums to propagate fake news and misinformation. The dominant line of work in this topic is fake news detection BIBREF15 which was mostly formulated as a binary classification problem. Researchers began to incorporate social context and other features for identifying fake news at an early stage and preventing it from diffusion on the social network BIBREF5, BIBREF7. Some other researchers focus on investigating the propagation patterns of fake news in social network BIBREF16, BIBREF17. BIBREF18 also studied fake news intervention. Unlike most previous works, we follow the direction of BIBREF12 and propose to build a personalized recommender system for promoting the fact-checking article circulation to debunk fake news. Related Works ::: Advancements in Recommender System Traditionally, recommendation algorithms can be divided into two categories: collaborative filtering BIBREF19 and content-based filtering. However, in the past few years, the recommendation has become a more integrated task due to the success of the deep neural network. Neural Networks (NNs) proves to be effective to capture underlying nonlinear relations BIBREF20. Another advantage is that the NNs enhanced the model's capability of extracting knowledge from multimodal data BIBREF21, BIBREF22, BIBREF23, which serves as auxiliary information and provide solutions to address the data sparsity problem. More recently, researchers introduced attention mechanism into recommender systems, which has achieved great success in various fields BIBREF24, BIBREF25. Researchers developed multiple variants of attention mechanism to improve both the recommendation precision and model interpretability BIBREF26, BIBREF27, BIBREF28, BIBREF29. In this paper, we also propose two novel designs of attention mechanism. Following BIBREF30, BIBREF31, we further explore multi-relational context of given user-URL pair, aiming at discriminating the most important elements towards URL-dependent user preference. Related Works ::: Graph Convolutional Networks With the surge of Graph-based Neural Network, GCN-based approaches have shown strong effectiveness on various tasksBIBREF32, BIBREF33, BIBREF34, including recommender system. The core idea is to iteratively aggregate attributed node vectors around each node, and messages propagates by stacking multiple layers. However, the original design of GCN is not suitable for our scenario because of the following reasons: First, existing GCN works BIBREF33, BIBREF34 do not distinguish different types of nodes, whereas in our case, it does not make sense to aggregate user and URL nodes together. 
And the aggregation function proposed in most GCN works treats all its adjacency nodes with the same importance. It is inappropriate in real-world applications and probably tends to neglect necessary information. BIBREF35 breaks this schema by using a multi-head attention mechanism to replace the convolution-like operator, yet it requires significant extra computation and memory. Compared to the previous works, in this paper, we focus on a novel application and investigate both co-occurrence context and social context related influences for fact-checking URL recommendation. We also incorporate sets of auxiliary attributes, which enable more comprehensive learning of the compatibility between given pairs of user and URL. Moreover, we take advantage of advancements in graph neural networks and attention mechanisms, and solve the aforementioned research problems. Problem Formulation We formally introduce definitions before describing our proposed framework. We define fact-checking behavior as a user (i.e., fact-checker) embeds a fact-checking URL in his reply in order to debunk fake news. We regard each fact-checking behavior as an implicit interaction between target user $i$ and target URL $j$. Problem Formulation ::: Definition 1 (Fact-checking URL Recommendation Task) Let $\mathcal {U} = \lbrace u_1,u_2,...,u_n\rbrace $ denotes a set of fact-checkers on social media, and use $\mathcal {C} = \lbrace c_1,c_2,...,c_m\rbrace $ to index fact-checking URLs. We construct user-URL interaction matrix $Y = \lbrace y_{ij} | u\in \mathcal {U}, v \in \mathcal {C} \rbrace $ according to users' fact-checking behavior, where each value of 1 for $y_{ij}$ indicates the existence of implicit interaction between target user $i$ and target URL $j$. Each user $u_i$ and each URL $c_j$ associate with a set of attributes. The goal of the recommendation task is to recommend top-N URLs from the URL set $\mathcal {C}$ to each user. We also construct the entire dataset as a heterogeneous graph, which is a special kind of information network that consists of either multiple types of objects or different types of links, or both. Problem Formulation ::: Definition 2 (Heterogeneous Network) @!START@BIBREF36@!END@ Formally, consider a heterogeneous graph $\mathcal {G}=(\mathcal {V},\mathcal {E})$, where $\mathcal {V} (|V|= m + n)$ and $E$ denote the node set and edge set, respectively. The heterogeneity represents by the node type mapping function: $\phi : \mathcal {V} \rightarrow \mathcal {A}$ and edge type projection function: $\psi : \mathcal {E} \rightarrow \mathcal {R}$, where $\mathcal {A}$ and $\mathcal {R}$ denote the sets of predefined node types and edge types, and $|\mathcal {A}| + |\mathcal {R}| > 2$. Note that we does not consider self-loop in our graph construction. Problem Formulation ::: Definition 3 (Multi-relational Context) Given target user $i$, we define his following fact-checkers and co-occurrenced fact-checkers as his social context user neighbors and co-occurrenced context user neighbors, respectively. Similarly, we name the other URLs posted by target user $i$ and co-occurrenced URLs of target URL $j$ as historical context URL neighbors and co-occurrenced context URL neighbors, respectively. In general, we call all the context neighbors as multi-relational context of given target user-URL pair. Problem Formulation ::: Example Figure FIGREF12 illustrates the multi-relational context. 
In Figure FIGREF12, $c_1$, $c_2$, $c_3$ represents fact-checking URLs and $u_1$, $u_2$, $u_3$ are users who involve sharing these URLs. For example, $(u_1 \rightarrow u_2)$ indicates the social relationship between $u_1$ and $u_2$. Intuitively, we care more about the influence of $u_2$ on $u_1$. $(u_1 \rightarrow c_1 \leftarrow u_2)$ means $u_1$ and $u_2$ are co-occurrenced user neighbors. Similarly, we name $c_1$ and $c_2$ as co-occurrenced URL neighbors of $u_3$, and $c_2$ is historical context URL neighbor given target $u_3$-$c_3$ pair. Proposed Framework We propose a novel framework called Attributed Multi-Relational Attention Network (AMRAN), to understand the influence of the multi-relational context to target user's fact-checking behavior. In this section, we elaborate our proposed AMRAN with using notations described in Table TABREF15. At the high level, AMRAN is composed of two modules as shown in Figure FIGREF16: (i) a convolutional spatial attention network (CSAN) and (ii) a heterogeneous graph attention network (HGAN). CSAN jointly models the influence of multi-relational context on target user-URL pair (Section 4.1). It enriches the neighborhood diversity, and expands the scope of information reception. HGAN leverages both global node connectivity and local node attributes, in order to incorporate the effect of information propagation and encode user's dynamic preference in depth (Section 4.2). At the final step, the model produces recommendations by combining wide context-aware target user embedding and URL embedding, multi-relational context user embedding and context URL embedding, and deep context-aware user embedding and URL embedding (Section 4.3). Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) The left bounding box in Figure FIGREF16 illustrates the structure of CSAN module. To provide a broad scope of knowledge for generating wide context-aware target user embedding and URL embedding, we adopt a multi-branch setting in CSAN. The two parallel branch models multi-relational context for target user and target URL respectively. Each branch contains two identical streams. We select $b_h$ context neighbors for each stream (e.g., historical context URL neighbors and co-occurrenced context URL neighbors of target URL, social context user neighbors and co-occurenced user neighbors of target user). These streams are employed to learn the most discriminative features from multi-relational neighbors of target user and target URL. Then we employ a gated fusion layer to capture the optimal global level representation of target user-URL pair. Note that we enable the embedding sharing within each branch as users/URLs share the same feature set. Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Raw Attribute Input User and URL associate with different feature sets. Therefore, CSAN starts from embedding the input attribute set of each context neighbor. We use $s$ and $t$ to denote the number of features related to user and URL, respectively. Note that the dimension of initial embedding for each attribute could be different since they may carry with different information volume. We use one-hot encoding for categorical feature inputs, and apply direct lookup on these features. However, the same solution performs poorly when it comes continuous attributes such as the post frequency of an URL. Empirically, we found that an available solution is to bucketize these features into small intervals. 
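A minimal sketch of one such bucketization is shown below, assuming power-of-two interval boundaries; the exact interval scheme used in this work is specified right after the sketch, and the index convention here may differ from it by one.

```python
# Bucketize a non-negative continuous attribute (e.g. an URL's post
# frequency) into exponentially growing intervals: [0,1), [1,2), [2,4), ...
# The max_bucket cap is an illustrative assumption, not taken from the paper.
import math


def bucketize(value, max_bucket=16):
    if value < 1:
        return 0
    return min(int(math.log2(value)) + 1, max_bucket)
```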
Specifically, we map these continuous attributes in range $[0,1), [1,2),..., [2^k, 2^{k+1})$ into $0,1,..., k$ in this work. Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Attribute Embedding Layer We then project them into the same latent space via a set of attribute-specific transformation matrices $W_1, W_2, ..., W_{s+t}$ to project all the attributes into a $w$-dimensional space. The attributes of each neighbor then are stacked as a matrix in shape of $s \times w$ for users and $t \times w$ for URLs. However, we treat the target user-URL pair differently. After projecting attributes by the same attribute-specific transformation matrix as their relational neighbors, instead of stacking them as a matrix, we concatenate the attribute embedding vectors together and feed it through a linear projection to generate $u^{\prime }_i \in \mathbb {R}^d$ and $c^{\prime }_j \in \mathbb {R}^d$ for future reference. Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Spatial Attention Block To prevent some unknown misalignment and conduct better comparison among the neighborhood features, we proposed a schema for jointly learning the layer-wise and channel-wise attention. In particular, for each stream, we pile the neighbors' representation matrices together to obtain a 3-dimensional tensor $M$. Intuitively, the design helps improve the alignment quality of neighbor's features. Then, inspired by BIBREF37, BIBREF38, we employ a spatial attention block in each stream for jointly learning channel-level and layer-level soft attention. See figure FIGREF21 for a high-level illustration of our spatial attention block. All the streams adopt identical spatial attention blocks, and each block attends the input attribute representations independently. In the figure, we use the historical context URL stream for illustration. The output of spatial attention block is an attention weight map $S \in \mathbb {R}^{t \times w \times b}$ which is in the same shape with the input tensor $M$. Intuitively, the layer-wise attention and channel-wise attention are dedicated to selecting the most discriminative features and the most important neighbors, respectively. Thus, they are highly complementary to each other in functionality; and we adopt a factorized manner for optimization and computational efficiency as: where $L \in \mathbb {R}^{t \times w \times 1}$ and $C \in \mathbb {R}^{1 \times 1 \times b}$ denote the layer-wise feature map and channel-wise feature map, respectively. $S$ is the result of tensor multiplication. Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Spatial Attention Block ::: Layer-wise Attention Conceptually, the layer-wise attention learns globally important elements in the feature. We apply a cross-channel average pooling operation onto the input tensor, following by 2 convolution layers of $3 \times 3$ and $1 \times 1$ filter, respectively. Specifically, cross-channel average pooling operation is defined as: where $b$ is the number of selected neighbors. Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Spatial Attention Block ::: Channel-wise Attention The design of channel-wise attention is very similar to layer-wise attention, which aims to acquire a global view of discriminative users. Formally, the global average pooling is defined as: where $t$ and $w$ are shared height and width of all channels. Similarly, we employ two convolution layers after the pooling operation. 
Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Gated Branch Fusion Layer
We apply another CNN layer with a $3 \times 3$ filter to the attended representation of each stream for feature extraction and dimension reduction: which produces the multi-relational context representation vectors $o_{i_h}, o_{i_c}, o_{u_f}$, and $o_{u_c}$ for the four streams, respectively. We employ a gated mechanism to assign different weights to the relation-specific neighborhood representations as: where the scalars $g_u$ and $g_v$ are learned automatically to control the importance of the two streams within each branch.
Proposed Framework ::: Heterogeneous Graph Attention Network (HGAN)
Following the recent success of Graph Convolutional Networks (GCNs) BIBREF32, BIBREF33, BIBREF40, BIBREF34, BIBREF35, we propose a heterogeneous graph attention network (HGAN) tailored for the recommendation task. In particular, our proposed module adopts a parallel attention structure for the user neighbors and the URL neighbors of the central node, respectively. Considering a heterogeneous graph $\mathcal {G}=(\mathcal {V},\mathcal {E})$, the nodes represent objects in this network, which can be either users or URLs, and the edges denote the relations between connected nodes. The node attributes are passed along the edges during propagation. We try to leverage both the local node attributes and the global network structure. Our novelty lies in two aspects: (i) we differentiate the contributions of URL nodes and user nodes, respectively; and (ii) we consider both node similarity and the influence of different relation types. While CSAN obtains information from multi-relational immediate neighbors, which expands the scope of knowledge for the target user and target URL representations, HGAN aims at learning deeper semantic representations of the target user and the target URL.
Proposed Framework ::: Heterogeneous Graph Attention Network (HGAN) ::: Heterogeneous Graph Network
We try to capture the different semantic relations behind the various types of nodes and edges. For every single layer, if the central node is a user node, its neighborhood contains its co-occurrenced users and posted URLs; if the central node is a URL node, its neighborhood consists of the users who posted it and its co-occurrenced URLs. We adopt a similar embedding approach to the one used in CSAN for the initial representation of each node, but we concatenate all the features into a single long vector $x_i$ for each node instead of stacking them as a matrix.
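A small sketch of how such initial node vectors $x_i$ might be assembled is shown below; the attribute vocabularies, embedding sizes, and helper names are illustrative assumptions rather than the authors' code:

```python
import torch
import torch.nn as nn

class HeteroNodeEncoder(nn.Module):
    """Sketch: build the initial node vector x_i of a user or URL node by
    embedding each integer-encoded attribute and concatenating the results
    into one long vector, as described above."""

    def __init__(self, attr_vocab_sizes, attr_dims):
        super().__init__()
        # One embedding table per attribute; users and URLs would use
        # different attribute sets (s user features vs. t URL features).
        self.tables = nn.ModuleList(
            [nn.Embedding(v, d) for v, d in zip(attr_vocab_sizes, attr_dims)]
        )

    def forward(self, attr_ids: torch.Tensor) -> torch.Tensor:
        # attr_ids: (num_nodes, num_attrs) bucketized / categorical indices.
        parts = [table(attr_ids[:, k]) for k, table in enumerate(self.tables)]
        return torch.cat(parts, dim=-1)  # (num_nodes, sum(attr_dims)) = x_i

# e.g., url_encoder = HeteroNodeEncoder([50, 20, 1000], [8, 8, 16])
# x = url_encoder(torch.randint(0, 20, (4732, 3)))  # hypothetical URL attributes
```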
Considering that different node types are associated with different feature sets, we use a set of node type-specific transformation matrices to project the different types of node representations into the same feature space before aggregation, as follows: Let $H^{(0)} \in \mathbb {R}^{(m+n) \times d}$ be the embedding matrix of all the attributed nodes, where $m+n$ is the total number of nodes and $d$ is the dimension of the latent embedding space; each row $h_i^{(0)}$ stands for the initial embedding vector of node $i$. We define edges based on users' references to URLs (user-URL edges), user co-occurrence relations (user-user edges), and URL co-occurrence relations (URL-URL edges). We then introduce an adjacency matrix $A$ of $\mathcal {G}$ based on the importance of each edge. In particular, to compute the weights of user-user edges and URL-URL edges, we adopt the Shifted Positive Point-wise Mutual Information (SPPMI) matrix BIBREF41, a popular measure of word association, to utilize the co-occurrence context information. In the word embedding scenario, each cell of the matrix measures the relation of the corresponding word-context pair, and factorizing such a matrix has been proven to be equivalent to the skip-gram model with negative sampling (SGNS). The Point-wise Mutual Information (PMI) between node $i$ and node $j$ is computed as $PMI(i,j) = \log \frac{P(i,j)}{P(i)P(j)}$, where $P(i,j) = \frac{\# (i,j)}{|D|}$ and $P(i) = \frac{\# (i)}{|D|}$. $|D|$ denotes the total number of observed word-context pairs within a predefined sliding window, and $P(i,j)$ is the joint probability that word $i$ and word $j$ appear together within the window. Furthermore, we introduce the SPPMI matrix as an extension based on the PMI value, $SPPMI(i,j) = \max \big (PMI(i,j) - \log k,\; 0\big )$, where $k$ is a hyperparameter representing the number of negative samples. Conceptually, a positive PMI value implies a semantically correlated word-context pair; therefore SPPMI, which keeps only the positive part of the PMI shifted by a global constant, reflects a closer semantic relation between word-context pairs. Inspired by this idea, we use $|D|$ to denote the number of user (URL) co-occurrences and generate a user co-occurrence matrix of shape $n \times n$ and a URL co-occurrence matrix of shape $m \times m$. Note that we do not discriminate between the target node and the context node. Similarly, we learn from the TF-IDF concept and redefine it for the recommendation task with implicit feedback BIBREF42 as: where $\# (i,j)$ represents the number of times URL $j$ is posted by user $i$. $TF_{ij}$ further normalizes it by the maximum number of times any URL is posted by user $i$, and $IDF_i$ is associated with the user's previous behavior, where $m$ denotes the total number of URLs and $m_i$ is the number of URLs posted by user $i$. Formally, the weight of the edge between node $i$ and node $j$ is defined as:
Proposed Framework ::: Heterogeneous Graph Attention Network (HGAN) ::: Heterogeneous Attention Layer (HGAL)
Given the nodes' initial representations defined above, we then pass messages to aggregate the neighborhood nodes' information and combine it with the target user's interests. A popular propagation strategy in existing GCN works is the normalized Laplacian matrix BIBREF32. Even though it has proven effective, it is not trainable, and it assigns the same weight to every adjacent node. Following previous work BIBREF35, we propose to incorporate a hierarchical attention mechanism to learn the weight of each adjacent node adaptively.
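Before detailing the attention layer itself, the SPPMI part of the edge-weight construction described above can be summarized in a short sketch; the helper name and the handling of pairs that never co-occur are assumptions, and the user-URL edges would analogously receive the TF-IDF-style weights defined above:

```python
import numpy as np

def sppmi_weights(cooccur: np.ndarray, k: int = 2) -> np.ndarray:
    """SPPMI edge weights for user-user (or URL-URL) edges (sketch).
    cooccur[i, j] is the number of times i and j co-occur; k plays the
    role of the negative-sample hyperparameter described above."""
    counts = cooccur.astype(float)
    total = counts.sum()                        # |D|
    row = counts.sum(axis=1, keepdims=True)     # #(i)
    col = counts.sum(axis=0, keepdims=True)     # #(j)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(counts * total / (row * col))
    pmi[~np.isfinite(pmi)] = -np.inf            # pairs that never co-occur
    return np.maximum(pmi - np.log(k), 0.0)     # shift by log k, keep positive part
```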
Since the distribution of the number of neighbors per node varies greatly, sub-sampling becomes an essential procedure in our task to avoid an explosion of computational cost when multiple hops are stacked. We adopt Weighted Random Selection (WRS) BIBREF43 to select a fixed number of nodes of both node types in each graph attention layer. Figure FIGREF40 shows a graphical illustration of one HGAL. Assume that the central node is a user node. We separately calculate the attention weights between the user node and its user-node neighbors, and between the user node and its URL-node neighbors. The similarity between the target user's node representation $h^{(l)}_u$ and each of its selected neighbors is defined as: where $h^{(l)}_i$ is the representation of user $i$ at layer $l$, and $\mathcal {N}^{\phi _t}_i$ denotes the node type-based neighborhood. We adopt $f(h^{(l)}_i,h^{(l)}_j)=cosine(h^{(l)}_i,h^{(l)}_j)$ as the similarity function. Intuitively, $\alpha ^{\phi }_{ij}$ measures the importance of neighbor $j$ to central node $i$; meanwhile, we obtain the edge weight $A_{ij}$ as well. After this, we aggregate the type-based neighborhood node representations and generate the neighborhood embedding as the average over the different node types: To model the information propagation and capture higher-order relations, we stack the HGAL multiple times. In addition, we introduce a residual connection BIBREF44 to help train an HGAN with many layers: where $\sigma $ denotes the sigmoid function, and $W_g^{(l)}$ and $b_g^{(l-1)}$ are the shared weight matrix and bias term at layer $l$, respectively. The node representation at the $l$-th layer thus provides knowledge from $l$ degrees away.
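The per-node update of one HGAL can be sketched as follows; note that the softmax normalization, the way the edge weight enters the attention, and the exact residual/gated combination are assumptions, since the corresponding equations are only summarized in prose above:

```python
import torch
import torch.nn.functional as F

def hgal_update(h_center, neighbors_by_type, edge_weights_by_type, W_g, b_g):
    """One HGAL update for a single central node (sketch).

    h_center:             (d,)   current representation of the central node
    neighbors_by_type:    dict node_type -> (n_t, d) sampled neighbor vectors
    edge_weights_by_type: dict node_type -> (n_t,)   SPPMI / TF-IDF edge weights
    W_g, b_g:             (d, d) weight matrix and (d,) bias of this layer
    """
    type_embeddings = []
    for node_type, h_nbrs in neighbors_by_type.items():
        # Cosine-similarity attention over neighbors of this type,
        # modulated by the corresponding edge weights A_ij.
        sim = F.cosine_similarity(h_center.unsqueeze(0), h_nbrs, dim=-1)
        alpha = F.softmax(sim * edge_weights_by_type[node_type], dim=0)
        type_embeddings.append((alpha.unsqueeze(-1) * h_nbrs).sum(dim=0))
    # Average the type-based neighborhood embeddings (user vs. URL neighbors).
    h_neighborhood = torch.stack(type_embeddings).mean(dim=0)
    # Residual connection with a gated transformation of the aggregated context.
    return h_center + torch.sigmoid(W_g @ h_neighborhood + b_g)
```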
Proposed Framework ::: Interaction Layer
The interaction layer is tailored for recommendation tasks. Recall that we obtained the wide context-based user embedding $u^{\prime }_i$ and URL embedding $c^{\prime }_j$, the multi-relational context representations $p_i$ and $p_j$, and the deep context-based user embedding $h^{(l)}_i$ and URL embedding $h^{(l)}_j$ in the previous sections. We then formulate the final URL-dependent user representation by using a fully connected layer as: where $W_o$ and $b_o$ are a linear transformation weight matrix and a bias term, respectively, and $\oplus $ denotes vector concatenation. Note that the fully connected layer can be replaced by other techniques (e.g., a CNN). Finally, we feed the result through a softmax function to calculate the probability that the user is interested in the given URL.
Proposed Framework ::: Training
We adopt the cross-entropy loss function during the training process. We follow a uniform sampling strategy to obtain negative samples $(i,j) \in Y^{-}$ from the unobserved interactions. Since the entire architecture is differentiable, we use backpropagation to achieve end-to-end training.
Evaluation
In this section, we describe the dataset, baselines, experimental settings, and experimental results. In the experiments, we seek to answer the following research questions:
RQ1: What is the performance of our model and the baselines?
RQ2: How beneficial is each submodule of our model?
RQ3: How effective are our attention mechanisms?
RQ4: How sensitive is our model with regard to hyperparameters?
Evaluation ::: Dataset
We evaluate our proposed model on a Twitter dataset obtained from the authors of BIBREF12. The interaction behavior collected in the dataset is consistent with our definition in SECREF3. As they did for their study, we only kept users who have at least three interactions (i.e., who posted at least three fact-checking messages containing fact-checking URLs). We conducted an additional preprocessing step by removing users whose posts are non-English or whose tweets were inaccessible, because some of our baselines require a fact-checker's tweets. Our final dataset consists of 11,576 users (i.e., fact-checkers), 4,732 fact-checking URLs, and 63,429 interactions. The dataset also contains each user's social network information. Note that each user's social relationships are restricted to the users available in the dataset. We further take the available feature values of both users and URLs into consideration. For instance, the category of the referred fact-checking article and the name of the corresponding fact-checking website reveal linguistic characteristics such as the writing style and topical interest of each URL, while the number of followers and the number of followees of each user indicate the credibility and influence of the fact-checker. Statistics of the final dataset are presented in Table TABREF65.
Evaluation ::: Baselines
To measure the relative effectiveness of our model, we compare it against eight state-of-the-art baselines, including a traditional collaborative filtering method, neural network-based models, and context-aware approaches.
MF BIBREF45 is a standard collaborative filtering technique. It factorizes an interaction matrix $X \in \mathbb {R}^{M \times N}$ into two matrices $U \in \mathbb {R}^{M \times d}$ and $V \in \mathbb {R}^{d \times N}$, where $U$ contains each user's latent representation and $V$ contains each URL's latent representation.
GAU BIBREF12 is a framework specifically designed for fact-checking URL recommendation, utilizing rich side information such as a user's social network, tweets, and referred fact-checking pages. It is the most relevant and domain-specific baseline.
NeuMF BIBREF20 is a neural network-based item recommendation algorithm. We adopted the composite version, which jointly couples MF with an MLP.
CMN BIBREF30 combines a global latent factor model with an augmented memory network to capture personalized neighbor-based structure in a non-linear fashion.
NAIS BIBREF31 is an item-based collaborative filtering architecture that integrates an attention mechanism to distinguish the contributions of previously consumed items. The authors proposed two versions of NAIS: (1) $NAIS_{concat}$, which concatenates two vectors to learn the attention weight; and (2) $NAIS_{prod}$, which feeds the element-wise product of the two vectors to the attention network. Therefore, we also build two versions of NAIS and compare them with our model.
DeepCoNN BIBREF46 was originally proposed for an item rating prediction task and jointly models users and items based on their textual reviews. Prior work shows that it significantly outperforms other topic-modeling-based methods. We re-implemented the baseline and adapted it to our recommendation task with implicit feedback.
NARRE BIBREF47 is a deep neural network-based framework for an item rating prediction task. It employs an attention mechanism to distinguish the importance of each review. We re-implemented the framework for our implicit feedback setting.
NGCF BIBREF48 is a recently proposed recommendation framework based on graph neural networks, which explicitly encodes the collaborative signal in the form of high-order connectivity in the user-item bipartite graph by performing embedding propagation.
Table TABREF66 presents the characteristics of the baselines and our model, showing what information each model utilizes. Note that even though CMN and NAIS both utilize co-occurrence context, CMN only utilizes user co-occurrence context, whereas NAIS looks into URL co-occurrence context.
Evaluation ::: Evaluation Protocol
We adopt the leave-one-out evaluation protocol to evaluate the performance of our model and the baselines. The leave-one-out evaluation protocol has been widely used in top-K recommendation tasks. In particular, we held out the latest interaction of each user as the test set and used the remaining interactions for training. Each testing instance was paired with 99 randomly sampled negative instances, and each recommendation model ranks the 100 instances according to its predicted scores. The ranked list is judged by Hit Ratio (HR) BIBREF49 and Normalized Discounted Cumulative Gain (NDCG) BIBREF50 at position 10. HR@10 is a recall-based metric, measuring the percentage of test items correctly recommended in the top-10 positions. NDCG@10 is a ranking metric that considers the position of the correct hit in the ranked result. Since both modules in our framework introduce randomness, we repeat each experiment 5 times with different weight initializations and randomly selected neighbors. We report the average of the best score achieved in each training run for both metrics to ensure the robustness of our results.
Evaluation ::: Hyper-parameter Settings
We implement our framework in PyTorch, initialize weight parameters with Xavier initialization BIBREF51, and optimize the model with the Adam optimizer BIBREF52. The mini-batch size is set to 128. Empirically, in CSAN, we select 10 neighbors for each stream. In HGAN, we choose 8 user neighbors and 8 URL neighbors for each central node at a single layer, and the default number of graph attention layers is set to 2. If there are not enough objects (i.e., user neighbors or URL neighbors), we pad the sequence with zero vectors. In the proposed AMRAN model, all hyperparameters are tuned by grid search on the validation set, which is formed by holding out one interaction of each user from the training data, as in prior work BIBREF20. We conduct the grid search over the latent dimension size in {8, 16, 32, 64}, the regularization term in {0.1, 0.01, 0.001, 0.0001, 0.00001}, the learning rate in {0.0001, 0.0003, 0.001, 0.01, 0.05, 0.1}, and the SPPMI shift constant $s$ in {1, 2, 5, 10}. The number of negative samples w.r.t. each positive interaction is set to 4. We adopt the same latent dimension size for all sub-modules. For a fair comparison, we also thoroughly optimize the baselines' hyperparameters by using the validation set.
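For reference, a minimal sketch of how the two ranking metrics used in the evaluation protocol above can be computed in this leave-one-out setting (one held-out positive ranked against 99 sampled negatives) is shown below:

```python
import math

def hit_ratio_at_k(rank_of_positive: int, k: int = 10) -> float:
    # rank_of_positive is the 1-based rank of the held-out item among
    # the 100 candidates (1 positive + 99 sampled negatives).
    return 1.0 if rank_of_positive <= k else 0.0

def ndcg_at_k(rank_of_positive: int, k: int = 10) -> float:
    # With a single relevant item per test case, DCG reduces to
    # 1 / log2(rank + 1) when the item appears in the top-k, and the
    # ideal DCG is 1, so NDCG@k equals that single term.
    if rank_of_positive > k:
        return 0.0
    return 1.0 / math.log2(rank_of_positive + 1)

# Example: a positive item ranked 3rd gives HR@10 = 1.0 and NDCG@10 = 0.5.
```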
Evaluation ::: RQ1: Performance of Our Model and Baselines
Table TABREF70 presents the performance of our model and the baselines. According to the results and the information described in Table TABREF66, we make the following observations. First, deep learning-based approaches usually obtain better performance than traditional models (e.g., MF and GAU). This observation makes sense because (1) traditional models fail to capture the important non-linear relationships between users and fact-checking URLs; (2) most deep learning-based baselines employ an attention mechanism, which helps better capture the semantic relation between a user and a URL; and (3) training tricks such as dropout and batch normalization also contribute to better training quality. In particular, $NAIS_{concat}$ achieves better performance than $NAIS_{prod}$, which supports reason (1). The second observation is that models using textual reviews achieve better results than the collaborative filtering-based methods. This is not surprising, since textual content contains rich information that can serve as auxiliary information to the implicit feedback data and thus improve recommendation accuracy. However, we observed that text-based recommendation approaches usually have high complexity. Third, social context and co-occurrence context play important roles in improving recommendation results. NAIS significantly outperforms CMN and becomes the strongest baseline model. This indicates that the URL-URL co-occurrence relationship is more important than the user-user co-occurrence relationship, since the semantic representation of a user is much more complex than that of a fact-checking URL. Overall, our AMRAN outperforms all baselines, achieving 0.657 HR@10 and 0.410 NDCG@10. It improves HR@10 by 5.3% and NDCG@10 by 3% over the best baseline (i.e., $NAIS_{concat}$).
Evaluation ::: RQ2: Effectiveness of our submodules
In this experiment, we are interested in measuring the effectiveness of the two submodules of AMRAN: CSAN and HGAN. Table TABREF71 presents the experimental results. CSAN achieves 0.642 HR@10 and 0.387 NDCG@10, whereas HGAN achieves 0.653 HR@10 and 0.403 NDCG@10. Both submodules outperform all the baselines in HR@10; HGAN outperforms all the baselines, and CSAN is competitive with the baselines. This result confirms that both CSAN and HGAN positively contribute to the performance of our AMRAN.
Evaluation ::: RQ3: Effectiveness of our Attention Mechanisms
We proposed two attention mechanisms: (1) the spatial attention block in CSAN; and (2) the graph attention mechanism in HGAN, described in Section SECREF4. In this experiment, we are interested in studying the impact of these attention mechanisms. In particular, we run each submodule of AMRAN (i.e., CSAN or HGAN) with and without the corresponding attention mechanism. Table TABREF74 shows the performance of these models. In both submodules, our proposed attention mechanisms improved performance, confirming their positive impact on correctly recommending fact-checking URLs.
Evaluation ::: RQ4: Hyperparameter Sensitivity
Now we analyze how sensitive our model is to hyperparameter values and which values produce the best recommendation results. Recall that we utilize context information to generate comprehensive embeddings of a given user and URL. In CSAN, we employ four streams to capture fine-grained context characteristics and share the embedding weight matrix with the target user and target URL representations. In the first experiment, we vary the number of neighbors associated with each stream in CSAN to show how CSAN's performance changes. Figure FIGREF76 shows that both $HR@10$ and $NDCG@10$ follow similar trends, and selecting 10 neighbors for each stream produced the best result. Next, we measure how the performance of HGAN changes when varying the number of HGALs and the number of selected neighbor nodes at each layer. Figure FIGREF77 demonstrates the benefit of employing 2 HGALs, which consistently outperform a single HGAL. The best performance was achieved when the number of selected neighbor nodes was set to 8.
In addition, we vary the number of negative samples and the size of the latent semantic space for the target user and target URL (i.e., the embedding vector size of the target user and target URL). Figure FIGREF78 shows that a higher-dimensional latent semantic space produces better performance of AMRAN; 64-dimensional embeddings produced the best results. We also observe that one negative sample is not enough to produce good results, especially when the embedding vector size is small. The top performance is achieved when each positive instance is paired with 3 or 4 negative instances.
Evaluation ::: Case Study: Visualization of Relevance Propagation
The attention mechanisms not only improve the recommendation performance of our model but also provide explainability. As a case study, we chose a specific example to demonstrate relevance propagation. In particular, we randomly sampled user 7849 as the example, as shown in Figure FIGREF80. User 7849 has 3 co-occurrenced users and 3 following users, and posted 4 URLs. Note that we omit less important 2nd-degree neighbors for simplicity. The most relevant neighbors and the propagation paths are highlighted automatically via the attention mechanism. In general, based on the user's historical context URLs, we observe that the topic user 7849 most likes to participate in debunking is fauxtography. However, in this particular case, the most influential context neighbors of the user are user 25 (co-occurrence context) and user 4759 (social context), given URL 1623. Both context neighbors share a similar taste with user 7849 regarding their favorite website (Politifact.com). Moreover, we found that URL 2525 appears in the 2nd-degree neighborhood of user 7849 and originates from the same website (Snopes.com) as URL 1623.
Conclusion
In this paper, we proposed a novel framework that effectively recommends relevant fact-checking URLs to fact-checkers. The proposed framework, inspired by recent advancements in graph neural networks and attention mechanisms, leverages user-URL specific context information to capture the deep semantics and complex structure between the target user and the target URL. We compared the performance of our model, AMRAN, with eight state-of-the-art baselines. Experimental results showed that our model achieved up to a 5.3% improvement over the best baseline. Both submodules of AMRAN positively contributed to the recommendation results.
This work was supported in part by NSF grant CNS-1755536, AWS Cloud Credits for Research, and Google Cloud. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors.
Overall, our AMRAN outperforms all baselines, achieving 0.657 HR@10 and 0.410 NDCG@10.
8b11bc3a23932afe7d52c19deffd9dec4830f2e9
8b11bc3a23932afe7d52c19deffd9dec4830f2e9_0
Q: How do the authors define fake news? Text: Introduction
While social media sites provide users with a revolutionized communication medium by bringing communication efficiency to a new level, they can easily be misused to widely spread misinformation and fake news. Fake news and misinformation have long been a serious issue, exploited for purposes such as political propaganda BIBREF0 and financial propaganda BIBREF1. To fight against fake news, traditional publishers employed human editors to manually and carefully check the content of news articles in order to maintain their reputation. However, social media provides a new way to spread news, which leads to broader information sources and an expanded audience (i.e., anyone can act as a media outlet and create news). In particular, users share news articles along with their own opinions, or read articles shared by their friends regardless of the news source, often with blind trust BIBREF2 or filtered through their own ideologies BIBREF3, BIBREF4. Although social media posts usually have a very short life cycle, the unprecedented amount of fake news may have a catastrophic impact on both individuals and society. Besides misleading users with false information BIBREF4, widely propagated fake news could even cause a trust crisis of the entire news ecosystem BIBREF5, further affecting both cyberspace and physical space.
In the literature, researchers have focused on four topics regarding fake news: characterization (i.e., types of fake news), motivation, circulation, and countermeasures BIBREF6, BIBREF7. A large body of work has been done on fake news identification BIBREF5, BIBREF8, BIBREF9, BIBREF10 by exploiting multiple content-related and social-related components. However, we notice that fake news is still widely spread even after early detection BIBREF11. Therefore, we propose to study a complementary approach to mitigate the spread and impact of fake news. Recently, communities and journalists have started building and maintaining fact-checking websites (e.g., Snopes.com). Social media users called fact-checkers also started using these fact-checking pages as factual evidence to debunk fake news by replying to fake news posters. Figure FIGREF1 demonstrates a real-world example of a fact-checker's fact-checking behavior on Twitter: debunking another user's false claim with a Snopes page URL as evidence to support the factual correction. In BIBREF12, researchers found that these fact-checkers actively debunked fake news mostly within one day, and their replies were exposed to hundreds of millions of users. To motivate these fact-checkers to engage with fake news posters more quickly and to intelligently consume the increasing volume of fact-checking articles, in this paper we propose a novel personalized fact-checking URL recommender system. According to BIBREF13, a co-occurrence matrix within a given context provides information about the semantic similarity between two objects. Therefore, in our proposed deep learning-based recommender system, we employ two extended matrices, a user-user co-occurrence matrix and a URL-URL co-occurrence matrix, to facilitate our recommendation. In addition, users tend to form relationships with like-minded people BIBREF14. Therefore, we incorporate each user's social context to capture semantic relations and enhance the recommendation performance. Our main contributions are summarized as follows: We propose a new framework for personalized fact-checking URL recommendation, which relies on multi-relational context neighbors.
We propose two attention mechanisms, which allow for learning deep semantic representations of both a target user and a target URL at different granularities. Experimental results show that our proposed model outperforms eight state-of-the-art baselines covering various types of recommendation approaches. Ablation studies confirm the effectiveness of each component in our proposed framework.
Related Works
In this section, we briefly review related works and position our work within the following areas: (1) fake news and misinformation; (2) advancements in recommender systems; and (3) graph convolutional networks.
Related Works ::: Fake News and Misinformation
Fake news has attracted considerable attention since it is related to our daily life and has become a serious problem in multiple areas such as politics BIBREF0 and finance BIBREF1. Social media sites have become one of the popular mediums to propagate fake news and misinformation. The dominant line of work on this topic is fake news detection BIBREF15, which is mostly formulated as a binary classification problem. Researchers began to incorporate social context and other features to identify fake news at an early stage and prevent it from diffusing on the social network BIBREF5, BIBREF7. Other researchers focus on investigating the propagation patterns of fake news in social networks BIBREF16, BIBREF17. BIBREF18 also studied fake news intervention. Unlike most previous works, we follow the direction of BIBREF12 and propose to build a personalized recommender system for promoting the circulation of fact-checking articles to debunk fake news.
Related Works ::: Advancements in Recommender System
Traditionally, recommendation algorithms can be divided into two categories: collaborative filtering BIBREF19 and content-based filtering. However, in the past few years, recommendation has become a more integrated task due to the success of deep neural networks. Neural networks (NNs) have proven effective at capturing underlying nonlinear relations BIBREF20. Another advantage is that NNs enhance the model's capability of extracting knowledge from multimodal data BIBREF21, BIBREF22, BIBREF23, which serves as auxiliary information and provides a way to address the data sparsity problem. More recently, researchers introduced attention mechanisms into recommender systems, which have achieved great success in various fields BIBREF24, BIBREF25. Researchers have developed multiple variants of the attention mechanism to improve both recommendation precision and model interpretability BIBREF26, BIBREF27, BIBREF28, BIBREF29. In this paper, we also propose two novel attention mechanism designs. Following BIBREF30, BIBREF31, we further explore the multi-relational context of a given user-URL pair, aiming to discriminate the most important elements of URL-dependent user preference.
Related Works ::: Graph Convolutional Networks
With the surge of graph-based neural networks, GCN-based approaches have shown strong effectiveness on various tasks BIBREF32, BIBREF33, BIBREF34, including recommender systems. The core idea is to iteratively aggregate attributed node vectors around each node, with messages propagating by stacking multiple layers. However, the original design of GCNs is not suitable for our scenario for the following reasons. First, existing GCN works BIBREF33, BIBREF34 do not distinguish different types of nodes, whereas in our case it does not make sense to aggregate user and URL nodes together.
Second, the aggregation function proposed in most GCN works treats all adjacent nodes with the same importance, which is inappropriate in real-world applications and tends to neglect necessary information. BIBREF35 breaks this scheme by replacing the convolution-like operator with a multi-head attention mechanism, yet it requires significant extra computation and memory. Compared to previous works, in this paper we focus on a novel application and investigate the influences of both co-occurrence context and social context for fact-checking URL recommendation. We also incorporate sets of auxiliary attributes, which enable more comprehensive learning of the compatibility between a given pair of user and URL. Moreover, we take advantage of advancements in graph neural networks and attention mechanisms to solve the aforementioned research problems.
Problem Formulation
We formally introduce definitions before describing our proposed framework. We define fact-checking behavior as a user (i.e., a fact-checker) embedding a fact-checking URL in his or her reply in order to debunk fake news. We regard each fact-checking behavior as an implicit interaction between target user $i$ and target URL $j$.
Problem Formulation ::: Definition 1 (Fact-checking URL Recommendation Task)
Let $\mathcal {U} = \lbrace u_1,u_2,...,u_n\rbrace $ denote a set of fact-checkers on social media, and let $\mathcal {C} = \lbrace c_1,c_2,...,c_m\rbrace $ index the fact-checking URLs. We construct a user-URL interaction matrix $Y = \lbrace y_{ij} \mid u_i \in \mathcal {U}, c_j \in \mathcal {C} \rbrace $ according to users' fact-checking behavior, where a value of 1 for $y_{ij}$ indicates the existence of an implicit interaction between target user $i$ and target URL $j$. Each user $u_i$ and each URL $c_j$ is associated with a set of attributes. The goal of the recommendation task is to recommend the top-N URLs from the URL set $\mathcal {C}$ to each user. We also construct the entire dataset as a heterogeneous graph, a special kind of information network that consists of multiple types of objects, multiple types of links, or both.
Problem Formulation ::: Definition 2 (Heterogeneous Network)
BIBREF36 Formally, consider a heterogeneous graph $\mathcal {G}=(\mathcal {V},\mathcal {E})$, where $\mathcal {V}$ ($|\mathcal {V}| = m + n$) and $\mathcal {E}$ denote the node set and edge set, respectively. The heterogeneity is represented by the node type mapping function $\phi : \mathcal {V} \rightarrow \mathcal {A}$ and the edge type projection function $\psi : \mathcal {E} \rightarrow \mathcal {R}$, where $\mathcal {A}$ and $\mathcal {R}$ denote the sets of predefined node types and edge types, and $|\mathcal {A}| + |\mathcal {R}| > 2$. Note that we do not consider self-loops in our graph construction.
Problem Formulation ::: Definition 3 (Multi-relational Context)
Given target user $i$, we define the fact-checkers he or she follows and his or her co-occurrenced fact-checkers as the social context user neighbors and the co-occurrenced context user neighbors, respectively. Similarly, we refer to the other URLs posted by target user $i$ and the co-occurrenced URLs of target URL $j$ as historical context URL neighbors and co-occurrenced context URL neighbors, respectively. In general, we call all of these context neighbors the multi-relational context of a given target user-URL pair.
Problem Formulation ::: Example
Figure FIGREF12 illustrates the multi-relational context.
In Figure FIGREF12, $c_1$, $c_2$, $c_3$ represents fact-checking URLs and $u_1$, $u_2$, $u_3$ are users who involve sharing these URLs. For example, $(u_1 \rightarrow u_2)$ indicates the social relationship between $u_1$ and $u_2$. Intuitively, we care more about the influence of $u_2$ on $u_1$. $(u_1 \rightarrow c_1 \leftarrow u_2)$ means $u_1$ and $u_2$ are co-occurrenced user neighbors. Similarly, we name $c_1$ and $c_2$ as co-occurrenced URL neighbors of $u_3$, and $c_2$ is historical context URL neighbor given target $u_3$-$c_3$ pair. Proposed Framework We propose a novel framework called Attributed Multi-Relational Attention Network (AMRAN), to understand the influence of the multi-relational context to target user's fact-checking behavior. In this section, we elaborate our proposed AMRAN with using notations described in Table TABREF15. At the high level, AMRAN is composed of two modules as shown in Figure FIGREF16: (i) a convolutional spatial attention network (CSAN) and (ii) a heterogeneous graph attention network (HGAN). CSAN jointly models the influence of multi-relational context on target user-URL pair (Section 4.1). It enriches the neighborhood diversity, and expands the scope of information reception. HGAN leverages both global node connectivity and local node attributes, in order to incorporate the effect of information propagation and encode user's dynamic preference in depth (Section 4.2). At the final step, the model produces recommendations by combining wide context-aware target user embedding and URL embedding, multi-relational context user embedding and context URL embedding, and deep context-aware user embedding and URL embedding (Section 4.3). Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) The left bounding box in Figure FIGREF16 illustrates the structure of CSAN module. To provide a broad scope of knowledge for generating wide context-aware target user embedding and URL embedding, we adopt a multi-branch setting in CSAN. The two parallel branch models multi-relational context for target user and target URL respectively. Each branch contains two identical streams. We select $b_h$ context neighbors for each stream (e.g., historical context URL neighbors and co-occurrenced context URL neighbors of target URL, social context user neighbors and co-occurenced user neighbors of target user). These streams are employed to learn the most discriminative features from multi-relational neighbors of target user and target URL. Then we employ a gated fusion layer to capture the optimal global level representation of target user-URL pair. Note that we enable the embedding sharing within each branch as users/URLs share the same feature set. Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Raw Attribute Input User and URL associate with different feature sets. Therefore, CSAN starts from embedding the input attribute set of each context neighbor. We use $s$ and $t$ to denote the number of features related to user and URL, respectively. Note that the dimension of initial embedding for each attribute could be different since they may carry with different information volume. We use one-hot encoding for categorical feature inputs, and apply direct lookup on these features. However, the same solution performs poorly when it comes continuous attributes such as the post frequency of an URL. Empirically, we found that an available solution is to bucketize these features into small intervals. 
Specifically, we map these continuous attributes in range $[0,1), [1,2),..., [2^k, 2^{k+1})$ into $0,1,..., k$ in this work. Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Attribute Embedding Layer We then project them into the same latent space via a set of attribute-specific transformation matrices $W_1, W_2, ..., W_{s+t}$ to project all the attributes into a $w$-dimensional space. The attributes of each neighbor then are stacked as a matrix in shape of $s \times w$ for users and $t \times w$ for URLs. However, we treat the target user-URL pair differently. After projecting attributes by the same attribute-specific transformation matrix as their relational neighbors, instead of stacking them as a matrix, we concatenate the attribute embedding vectors together and feed it through a linear projection to generate $u^{\prime }_i \in \mathbb {R}^d$ and $c^{\prime }_j \in \mathbb {R}^d$ for future reference. Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Spatial Attention Block To prevent some unknown misalignment and conduct better comparison among the neighborhood features, we proposed a schema for jointly learning the layer-wise and channel-wise attention. In particular, for each stream, we pile the neighbors' representation matrices together to obtain a 3-dimensional tensor $M$. Intuitively, the design helps improve the alignment quality of neighbor's features. Then, inspired by BIBREF37, BIBREF38, we employ a spatial attention block in each stream for jointly learning channel-level and layer-level soft attention. See figure FIGREF21 for a high-level illustration of our spatial attention block. All the streams adopt identical spatial attention blocks, and each block attends the input attribute representations independently. In the figure, we use the historical context URL stream for illustration. The output of spatial attention block is an attention weight map $S \in \mathbb {R}^{t \times w \times b}$ which is in the same shape with the input tensor $M$. Intuitively, the layer-wise attention and channel-wise attention are dedicated to selecting the most discriminative features and the most important neighbors, respectively. Thus, they are highly complementary to each other in functionality; and we adopt a factorized manner for optimization and computational efficiency as: where $L \in \mathbb {R}^{t \times w \times 1}$ and $C \in \mathbb {R}^{1 \times 1 \times b}$ denote the layer-wise feature map and channel-wise feature map, respectively. $S$ is the result of tensor multiplication. Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Spatial Attention Block ::: Layer-wise Attention Conceptually, the layer-wise attention learns globally important elements in the feature. We apply a cross-channel average pooling operation onto the input tensor, following by 2 convolution layers of $3 \times 3$ and $1 \times 1$ filter, respectively. Specifically, cross-channel average pooling operation is defined as: where $b$ is the number of selected neighbors. Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Spatial Attention Block ::: Channel-wise Attention The design of channel-wise attention is very similar to layer-wise attention, which aims to acquire a global view of discriminative users. Formally, the global average pooling is defined as: where $t$ and $w$ are shared height and width of all channels. Similarly, we employ two convolution layers after the pooling operation. 
Note that each convolution layer was followed by batch normalization operation. Furthermore, as other work of modern CNN structure BIBREF39, we append a ReLU activation function to assure $L>0, C>0$. We further introduce one more convolution layer of $1 \times 1 \times b$ filter for enhancing the fusion of the layer-wise attention and channel-wise attention. The output tensor then is fed through a sigmoid function for normalization and generate the final attention weight tensor of spatial attention block. Formally, the output of the spatial attention module is the element-wise product of initial feature tensor $M$ and generated attention weights $S$: Intuitively, the attended feature map learned fine-grained important elements via high alignment and compatible attentions. Proposed Framework ::: Convolutional Spatial Attention Network (CSAN) ::: Gated Branch Fusion Layer We apply another CNN layer of $3 \times 3$ filter after the attended user representation of each stream for feature extraction and dimension : which produces the multi-relational context representation vectors: $o_{i_h}, o_{i_c}, o_{u_f}$ and $o_{u_c}$ for each stream, respectively. We employ a gated mechanism to assigns different weights to relation-specific neighborhood representation as: where scalars $g_u$ and $g_v$ are learned automatically to control the importance of the two streams within each branch. Proposed Framework ::: Heterogeneous Graph Attention Network (HGAN) Following recent success in Graph Convolutional Network (GCN) BIBREF32, BIBREF33, BIBREF40, BIBREF34, BIBREF35. We propose a heterogeneous graph attention network (HGAN) which is tailored for recommendation task. In particular, our proposed module adopts a parallel attention structure for the user neighbor and the URL neighbor of the central node, respectively. Considering a heterogeneous graph $\mathcal {G}=(\mathcal {V},\mathcal {E})$, the nodes represent objects in this network which can be either user or URL. The edges denote the relation between connected nodes. The node attributes pass along the edges during the propagation. We try to leverage between the local node attributes and global network structure. Our novelty lies in two aspects: (i) we differentiate the contribution of URL node and user node, respectively; and (ii) we consider both similarities of node and the influence of different relation types. While the CSAN obtains information from multi-relational immediate neighbors, which expand the scope of knowledge for target user and target URL representations, HGAN aims at learning deeper semantic representations of target user and target URL. Proposed Framework ::: Heterogeneous Graph Attention Network (HGAN) ::: Heterogeneous Graph Network We try to capture different semantic relation behind various types of nodes and edges. For every single layer, if the central node is user node, its neighborhood contains its co-occurrenced users and posted URLs. If the central node type is URL, its neighborhood nodes consist of users who posted it and its co-occurrenced URLs. We adopt similar embedding approach as we did in CSAN for the initial representation of each node, but we concatenate all the features into a long vector $x_i$ for each node instead of stacking them as a matrix. 
Considering the different types of the node associated with the varied feature set, we use a set of node type-specific transformation matrices to project different types of node representation into the same feature space before aggregation as follows: Let $H^{(0)} \in \mathbb {R}^{(m+n) \times d}$ be the embedding matrix of all the attributed nodes, where $m+n$ is the total number of nodes and d is the dimension of latent embedding space; each row $h_i^{(0)}$ stands for the initial embedding vector of node $i$. We define edges based on users' reference of URL (user-URL edges), user co-occurrence relation (user-user edges), and URL co-occurrence (URL-URL edges). We then introduce an adjacency matrix $A$ of $\mathcal {G}$ based on the importance of each edge. In particular, to compute the weight of user-user edges and URL-URL edges, we adopt a matrix named Shifted Positive Point-wise Mutual Information (SPPMI) BIBREF41, a popular measure for word associations, to utilize the co-concurrence context information. In word embedding scenario, each cell within the matrix measures the relation of corresponding word-context pair. The factorization of such matrix is proved to be equivalent to skip-gram model with negative sampling (SGNS). The Point-wise Mutual Information (PMI) between node $i$ and node $j$ is computed as $PMI(i,j) = log \frac{P(i,j)}{P(i)P(j)}$ where $P(i,j) = \frac{\# (i,j)}{|D|}$ and $P(i) = \frac{\# (i)}{|D|}$. $|D|$ denotes the total number of observed word-context pairs within a predefined sliding window. $P(i,j)$ is the joint probability that word $i$ and word $j$ appear together within the window size. Furthermore, we introduce the SPPMI matrix as an extension based on PMI value: where $k$ is a hyperparameter, which represents the number of negative samples. Conceptually, a positive PMI value implies a semantically correlated word-context pair, Therefore, SPPMI, which only takes the positive value of PMI shifted by a global constant, reflects a closer semantic relation between word-context pairs. Inspired by this concept/idea, we use $|D|$ to denote the number of times of user (URL) co-occurrence and generate the user co-occurrence matrix in shape of $n \times n$ and URL co-occurrence matrix of $m \times m$. Note that we do not discriminate between the target node and context node. Similarly, we learn from the TF-IDF concept and redefine it on recommendation task with implicit feedback BIBREF42 as: where $\# (i,j)$ represents the number of times URL $j$ be posted by user $i$. $TF_{ij}$ further normalizes it by the maximum number of post times of any URL by user $i$. The $IDF_i$ is associated with the user's previous behavior as $m$ denotes the total number of URLs and $m_i$ is the number of URLs posted by user $i$. Formally, the weight of the edge between node $i$ and node $j$ is defined as: Proposed Framework ::: Heterogeneous Graph Attention Network (HGAN) ::: Heterogeneous Attention Layer (HGAL) Given the node's initial representation defined as above, we then pass messages to aggregate the neighborhood nodes' information and combine it with the target user's interests. A popular propagation strategy in existing GCN works is the normalized Laplacian matrix BIBREF32. Even though it proves to be effective, it is not trainable and it assigns every adjacent node with the same weight. Following previous work BIBREF35, we propose to incorporate a hierarchical attention mechanism to learn the weight of each adjacent node adaptively. 
Since the distribution of the number of neighbors of each node disperses greatly, sub-sampling becomes an essential procedure in our task to avoid an explosion of computation cost after multiple hops stacked. We adopt Weighted Random Selection (WRS) BIBREF43 to select a fixed number of nodes for both node types in each graph attention layer. Figure FIGREF40 shows a graphical illustration of one HGAL. Assume that the central node is a user node. We separately calculate the attention weights between the user node and its user node neighbors, or between the user node and its URL node neighbors. The similarity between the target user's node representation $h^{(l)}_u$ and all of its selected neighbors are defined as: where $h^{(l)}_i$ is the representation of user $i$ at layer $l$, and $\mathcal {N}^{\phi _t}_i$ denotes the node type-based neighbor. We adopt $f(h^{(l)}_i,h^{(l)}_j)=cosine(h^{(l)}_i,h^{(l)}_j)$ as similarity function. Intuitively, $\alpha ^{\phi }_{ij}$ measures the importance of neighbor $j$ towards central node $i$. Meanwhile, we obtain the edge weight $A_{ij}$ as well. After this, we aggregate the type-based neighborhood node representation and generate the embedding of neighborhood as the average of different types of nodes: To model the information propagation and capture higher-order relations, we stack the HGAL multiple times. In addition, we introduce the residual connection BIBREF44 to help train a HGAN with many layers. where $\sigma $ denotes the sigmoid function. $W_g^{(l)}$ and $b_g^{(l-1)}$ are the shared weight matrix and bias term at layer $l$, respectively. The node representation at $l$-th layer provides knowledge of $l$ degrees away. Proposed Framework ::: Interaction Layer The interaction layer is tailored for recommendation tasks. Recall that we obtained wide context-based user embedding $u^{\prime }_i$ and URL embedding $c^{\prime }_j$, context representations $p_i$, $p_j$ and deep context-based user embedding $h^{(l)}_i$ and URL embedding $h^{(l)}_j$ in the previous sections. Then we formulate the final URL-dependent user representation by using a fully connected layer as: where $W_o$ and $b_o$ are a linear transformation weight matrix and bias term, respectively. $\oplus $ denotes vector concatenation. Note that the fully-connected layer can be replaced by other techniques (e.g. CNN). Finally, we feed it through a softmax function to calculate the probability that user interested in the given URL. Proposed Framework ::: Training We adopt the cross-entropy loss function during the training process. We follow a uniform sampling strategy to obtain negative samples $(i,j) \in Y^{-}$ from unobserved interactions. Since the entire architecture is differentiable, we use back propagation to achieve end-to-end training. Evaluation In this section, we describe a dataset, baselines, experimental setting, and experimental results. In the experiments, we seek to answer the following research questions: RQ1: What is the performance of our model and baselines? RQ2: How beneficial is each submodule of our model? RQ3: How effective is our attention mechanisms? RQ4: What is sensitivity of our model with regard to hyperparameters? Evaluation ::: Dataset We evaluate our proposed model on a Twitter dataset obtained from the authors of BIBREF12. The interaction behavior collected in the dataset is consistent with our definition in SECREF3. 
As they did for their study, we only kept users who have at least three interactions (i.e., posting at least three fact-checking messages containing fact-checking URLs). We conducted additional preprocessing step by removing users whose posts are non-English, or their tweets were inaccessible, because some of our baselines require a fact-checker's tweets. Our final dataset consists of 11,576 users (i.e, fact-checkers), 4,732 fact-checking URLs and 63,429 interactions. The dataset also contains each user's social network information. Note that each user's social relationship is restricted within available users in the dataset. And we further take available feature values of both user and URL into consideration. For instance, a category of referred fact-checking article and the name of corresponding fact-checking website reveals linguistic characteristics such as writing style and topical interest of each URL; while the number of followers and number of followees of each user indicates the credibility and influence of the fact-checker. Statistics of the final dataset is presented in Table TABREF65. Evaluation ::: Baselines To measure relative effectiveness of our model, we compare our model against eight state-of-the-art baselines including the traditional collaborative filtering method, neural network-based models, and context-aware approaches. MF BIBREF45 is a standard collaborative filtering technique. It factorizes an interaction matrix $X \in \mathbb {R}^{M \times N}$ into two matrices $U \in \mathbb {R}^{M \times d}$ and $X \in \mathbb {R}^{d \times N}$. $U$ contains each user's latent representation, and $X$ contains each URL's latent representation. GAU BIBREF12 is a framework specifically designed for fact-checking URL recommendation utilizing rich side information such as a user' social network, tweets, and referred fact-checking pages. It is the most relevant and domain-specific baseline. NeuMF BIBREF20 is a neural network based item recommendation algorithm. We adopted a composite version of MF jointly coupled with a MLP. CMN BIBREF30 combines a global latent factor model with an augmented memory network to capture personalized neighbor-based structure in a non-linear fashion. NAIS BIBREF31 is an item-based collaborative filtering architecture that integrates attention mechanism to distinguish the contribution of previously consumed items. The authors proposed two versions of NAIS: (1) $NAIS_{concat}$ which concatenates two vectors to learn the attention weight; and (2) $NAIS_{prod}$ which feeds the element-wise product of the two vectors to the attention network. Therefore, we also build two versions of NAIS, and compare them with our model. DeepCoNN BIBREF46 was originally proposed for an item rating prediction task which jointly model user and item based on their textual reviews. The prior work shows that it significantly outperforms other topic modeling based methods.We re-implemented the baseline and adapted it for our recommendation task with implicit feedback. NARRE BIBREF47 is a deep neural network based framework for a item rating prediction task. It employs the attention mechanism to distinguish the importance of each review. We re-implemented the framework for our implicit feedback situation. NGCF BIBREF48 is a new recommendation framework based on graph neural network, explicitly encoding the collaborative signal in the form of high-order connectivity in user-item bipartite graph by performing embedding propagation. 
Table TABREF66 presents characteristics of baselines and our model, showing what information each model utilizes. Note that even though CMN and NAIS both utilize co-occurrence context, CMN only utilizes user co-occurrence context whereas NAIS looks into URL co-occurrence context. Evaluation ::: Evaluation Protocol We adopt the leave-one-out evaluation protocol to evaluate the performance of our model and baselines. The leave-one-out evaluation protocol has been widely used in top-K recommendation tasks. In particular, we held the latest interaction of each user as the test set and used the remaining interactions for training. Each testing instance was paired with 99 randomly sampled negative instances. Each recommendation model ranks the 100 instances according to its predicted results. The ranked list is judged by Hit Ratio (HR) BIBREF49 and Normalized Discount Cumulative Gain (NDCG) BIBREF50 at the position 10. HR@10 is a recall-based metric, measuring the percentage of the testing item being correctly recommended in the top-10 position. NDCG@10 is a ranked evaluation metric which considers the position of the correct hit in the ranked result. Since both modules in our framework introduce randomness, we repeat each experiment 5 times with different weight initialization and randomly selecting neighbors. We report the average score of the best performance in each training process for both metrics to ensure the robustness of our framework. Evaluation ::: Hyper-parameter Settings We implement our framework by using Pytorch framework, initialize weight parameters by Xavier initialization BIBREF51, and optimize the model with Adam optimizer BIBREF52. The mini-batch size is set to 128. Empirically, in CSAN, we select 10 neighbors for each stream. In HGAN, we choose 8 user neighbors and 8 URL neighbors for each central node at a single layer, and the default number of graph attention layers is set to 2. If the object (i.e.g, user neighbor or URL neighbor) is not sufficient enough, we pad the sequence with zeros vectors. In the proposed AMRAN model, all hyperparameters are tuned by using the grid-search on the validation set, which is formed by holding out one interaction of each user from the training data like the prior work BIBREF20. We conduct the grid search over a latent dimension size from {8,16,32,64}, a regularization term from {0.1, 0.01, 0.001, 0.0001, 0.00001}, a learning rate from {0.0001, 0.0003, 0.001, 0.01, 0.05, 0.1}, and SPPMI shifted constant value $s$ from {1, 2, 5, 10}. The number of negative samples w.r.t each positive interaction is set to 4. We adopt the same latent dimension size for all sub-modules. For a fair comparison, we also thoroughly optimize the baselines' hyperparameters by using the validation set. Evaluation ::: RQ1: Performance of Our Model and Baselines Table TABREF70 presents performance of our model and baselines. According to the results and information described in Table TABREF66, we had the following observations. First, deep learning-based approaches usually obtained better performance than traditional models (e.g., MF and GAU). This observation makes sense because (1) traditional models failed to capture the important non-linear relationship between users and fact-checking URLs; (2) Most deep-learning based baseline models employ attention mechanism which helps better understand the semantic relation between user and URL; and (3) training tricks such as drop out and batch normalization also contribute to a better quality of training. 
In particular, $NAIS_{concat}$ achieves better performance than $NAIS_{prod}$, which supports reason (1). The second observation is that models using textual reviews achieve better results than collaborative filtering-based methods. This is not surprising, since textual content contains rich information that serves as auxiliary signal to the implicit feedback data and thus improves recommendation accuracy. However, we observe that text-based recommendation approaches usually have high complexity. Third, social context and co-occurrence context play important roles in improving recommendation results. NAIS significantly outperforms CMN and becomes the strongest baseline, which indicates that the URL-URL co-occurrence relationship is more important than the user-user co-occurrence relationship, since the semantic representation of a user is much more complex than that of a fact-checking URL. Overall, our AMRAN outperforms all baselines, achieving 0.657 HR@10 and 0.410 NDCG@10. It improves HR@10 by 5.3% and NDCG@10 by 3% over the best baseline (i.e., $NAIS_{concat}$).

Evaluation ::: RQ2: Effectiveness of our submodules

In this experiment, we measure the effectiveness of the two submodules of AMRAN: CSAN and HGAN. Table TABREF71 shows the experimental results. CSAN achieves 0.642 HR@10 and 0.387 NDCG@10, whereas HGAN achieves 0.653 HR@10 and 0.403 NDCG@10. Both submodules outperform all baselines in HR@10; HGAN outperforms all baselines in both metrics, and CSAN remains competitive with them. This result confirms that both CSAN and HGAN positively contribute to the performance of AMRAN.

Evaluation ::: RQ3: Effectiveness of our Attention Mechanisms

We proposed two attention mechanisms: (1) the spatial attention block in CSAN; and (2) the graph attention mechanism in HGAN, described in Section SECREF4. In this experiment, we study the impact of these attention mechanisms. In particular, we run each submodule of AMRAN (i.e., CSAN or HGAN) with and without its corresponding attention mechanism. Table TABREF74 shows the performance of these variants. In both submodules, the proposed attention mechanisms improve performance, confirming their positive impact on correctly recommending fact-checking URLs.

Evaluation ::: RQ4: Hyperparameter Sensitivity

We now analyze how sensitive our model is to hyperparameter values and which values produce the best recommendation results. Recall that we utilize context information to generate comprehensive embeddings of a given user and URL. In CSAN, we employ four streams to capture fine-grained context characteristics and share the embedding weight matrix with the target user and target URL representations. In the first experiment, we vary the number of neighbors associated with each stream in CSAN to show how CSAN's performance changes. Figure FIGREF76 shows that both $HR@10$ and $NDCG@10$ follow similar trends, and selecting 10 neighbors at each stream produces the best result. Next, we measure how the performance of HGAN changes when varying the number of HGALs and the number of selected neighbor nodes at each layer. Figure FIGREF77 demonstrates the necessity of employing 2 HGALs, which consistently outperform a single HGAL. The best performance is achieved when the number of selected neighbor nodes is set to 8.
In addition, we vary the number of negative samples and the size of the latent semantic space for the target user and target URL (i.e., the embedding vector size). Figure FIGREF78 shows that a higher-dimensional latent semantic space yields better performance of AMRAN, with 64-dimensional embeddings producing the best results. We also observe that one negative sample is not enough to produce good results, especially when the embedding vector size is small. The top performance is achieved when one positive instance is paired with 3 or 4 negative instances.

Evaluation ::: Case Study: Visualization of Relevance Propagation

The attention mechanisms not only improve the recommendation performance of our model but also provide explainability. As a case study, we chose an example to demonstrate relevance propagation. In particular, we randomly sampled user 7849 as the example, as shown in Figure FIGREF80. User 7849 has 3 co-occurrence users and 3 following users, and posted 4 URLs. Note that we omit less important 2nd-degree neighbors for simplicity. The most relevant neighbors and the propagation paths are highlighted automatically via the attention mechanism. In general, based on the user's historical context URLs, we observe that the topic user 7849 would like to participate in debunking is fauxtography. However, in this particular case, the most influential context neighbors of the user are user 25 (co-occurrence user) and user 4759 (social context), given URL 1623. Both context neighbors share a similar taste with user 7849 regarding their favorite website (Politifact.com). Moreover, we found that URL 2525 appears in the 2nd-degree neighborhood of user 7849 and originates from the same website (Snopes.com) as URL 1623.

Conclusion

In this paper, we proposed a novel framework which effectively recommends relevant fact-checking URLs to fact-checkers. The proposed framework, inspired by recent advances in graph neural networks and attention mechanisms, leverages user-URL specific context information to capture the deep semantic and complex structural relations between a target user and a target URL. We compared the performance of our model, AMRAN, with eight state-of-the-art baselines. Experimental results showed that our model achieved up to 5.3% improvement over the best baseline, and that both submodules of AMRAN positively contributed to the recommendation results. This work was supported in part by NSF grant CNS-1755536, AWS Cloud Credits for Research, and Google Cloud. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors.